The Cylc Suite Engine
User Guide
5.4.5
GNU GPL v3.0 Software License
Copyright (C) 2008-2013 Hilary Oliver, NIWA

Hilary Oliver

December 17, 2013

Contents

1 Introduction: How Cylc Works
2 Cylc Screenshots
3 Required Software
4 Installation
5 On The Meaning Of Cycle Time In Cylc
6 Site And User Configuration Files
7 Tutorial
8 Suite Name Registration And Passphrases
9 Suite Definition
10 Task Implementation
11 Task Job Submission, Poll and Kill
12 Running Suites
13 Other Topics In Brief
14 Suite Storage, Discovery, Revision Control, and Deployment
15 Suite Design Principles
16 Style Guide
A Suite.rc Reference
B Site And User Config File Reference
C Command Reference
D The Cylc Lockserver
E The gcylc Graph View
F Cylc Project README File
G Cylc Project INSTALL File
H Cylc Development History - Major Changes
I Pyro
J GNU GENERAL PUBLIC LICENSE v3.0

1 Introduction: How Cylc Works

 1.1 Scheduling Forecast Suites
 1.2 EcoConnect
 1.3 Dependence Between Tasks
 1.4 The Cylc Scheduling Algorithm

1.1 Scheduling Forecast Suites

Environmental forecasting suites generate forecast products from a potentially large group of interdependent scientific models and associated data processing tasks. They are constrained by availability of external driving data: typically one or more tasks will wait on real time observations and/or model data from an external system, and these will drive other downstream tasks, and so on. The dependency diagram for a single forecast cycle in such a system is a Directed Acyclic Graph as shown in Figure 1 (in our terminology, a forecast cycle comprises all tasks with a common cycle time, which is the nominal analysis time or start time of the forecast models in the group). In real time operation processing will consist of a series of distinct forecast cycles that are each initiated, after a gap, by arrival of the new cycle’s external driving data.

From a job scheduling perspective task execution order in such a system must be carefully controlled in order to avoid dependency violations. Ideally, each task should be queued for execution at the instant its last prerequisite is satisfied; this is the best that can be done even if queued tasks are not able to execute immediately because of resource contention.

1.2 EcoConnect

Cylc was developed for the EcoConnect Forecasting System at NIWA (National Institute of Water and Atmospheric Research, New Zealand). EcoConnect takes real time atmospheric and stream flow observations, and operational global weather forecasts from the Met Office (UK), and uses these to drive global sea state and regional data assimilating weather models, which in turn drive regional sea state, storm surge, and catchment river models, plus tide prediction, and a large number of associated data collection, quality control, preprocessing, post-processing, product generation, and archiving tasks. The global sea state forecast runs once daily. The regional weather forecast runs four times daily but it supplies surface winds and pressure to several downstream models that run only twice daily, and precipitation accumulations to catchment river models that run on an hourly cycle assimilating real time stream flow observations and using the most recently available regional weather forecast. EcoConnect runs on heterogeneous distributed hardware, including a massively parallel supercomputer and several Linux servers.

1.3 Dependence Between Tasks

1.3.1 Intra-cycle Dependence

Most dependence between tasks applies within a single forecast cycle. Figure 1 shows the dependency diagram for a single forecast cycle of a simple example suite of three forecast models (a, b, and c) and three post processing or product generation tasks (d, e and f). A scheduler capable of handling this must manage, within a single forecast cycle, multiple parallel streams of execution that branch when one task generates output for several downstream tasks, and merge when one task takes input from several upstream tasks.




Figure 1: The dependency graph for a single forecast cycle of a simple example suite. Tasks a, b, and c represent forecast models, d, e and f are post processing or product generation tasks, and x represents external data that the upstream forecast model depends on.





Figure 2: The optimal job schedule for two consecutive cycles of our example suite during real time operation, assuming that all tasks trigger off upstream tasks finishing completely. The horizontal extent of a task bar represents its execution time, and the vertical blue lines show when the external driving data becomes available.


Figure 2 shows the optimal job schedule for two consecutive cycles of the example suite in real time operation, given execution times represented by the horizontal extent of the task bars. There is a time gap between cycles as the suite waits on new external driving data. Each task in the example suite happens to trigger off upstream tasks finishing, rather than off any intermediate output or event; this is merely a simplification that makes for clearer diagrams.




Figure 3: If the external driving data is available in advance, can we start running the next cycle early?





Figure 4: A naive attempt to overlap two consecutive cycles using the single-cycle dependency graph. The red shaded tasks will fail because of dependency violations (or will not be able to run because of upstream dependency violations).





Figure 5: The best that can be done in general when inter-cycle dependence is ignored.


Now the question arises, what happens if the external driving data for upcoming cycles is available in advance, as it would be after a significant delay in operations, or when running a historical case study? While the forecast model a appears to depend only on the external data x at this stage of the discussion, in fact it would typically also depend on its own previous instance for the model background state used in initializing the new forecast. Thus, as alluded to in Figure 3, task a could in principle start as soon as its predecessor has finished. Figure 4 shows, however, that starting a whole new cycle at this point is dangerous - it results in dependency violations in half of the tasks in the example suite. In fact the situation could be even worse than this - imagine that task b in the first cycle is delayed for some reason after the second cycle has been launched. Clearly we must consider handling inter-cycle dependence explicitly or else agree not to start the next cycle early, as is illustrated in Figure 5.

1.3.2 Inter-cycle Dependence

Forecast models typically depend on their own most recent previous forecast for background state or restart files of some kind (this is called warm cycling) but there can also be inter-cycle dependence between different tasks. In an atmospheric forecast analysis suite, for instance, the weather model may generate background states for observation processing and data-assimilation tasks in the next cycle as well as for the next forecast model run. In real time operation inter-cycle dependence can be ignored because it is automatically satisfied when one cycle finishes before the next begins. If it is not ignored it drastically complicates the dependency graph by blurring the clean boundary between cycles. Figure 6 illustrates the problem for our simple example suite assuming minimal inter-cycle dependence: the warm cycled models (a, b, and c) each depend on their own previous instances.
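In cylc this minimal inter-cycle dependence can be written directly in the suite graph. The fragment below is an illustrative sketch only, not a configuration from the example suite: the task names match Figure 1 but the exact graph edges, the 6-hourly cycling, and the initial cycle time are all invented for the example (cylc 5 graph notation, with `[T-6]` meaning "this task at the previous 6-hourly cycle").

```ini
# Illustrative sketch only - edges and cycle times are invented.
[scheduling]
    initial cycle time = 2013010100
    [[dependencies]]
        [[[0,6,12,18]]]
            graph = """
    x => a => b & c
    # warm cycling: each model depends on its own previous instance
    a[T-6] => a
    b[T-6] => b
    c[T-6] => c
                    """
```

Written this way, the inter-cycle arrows of Figure 6 become ordinary dependencies that the scheduler can satisfy across cycle boundaries.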

For this reason, and because we tend to see forecasting suites in terms of their real time characteristics, other metaschedulers have ignored inter-cycle dependence and are thus restricted to running entire cycles in sequence at all times. This does not affect normal real time operation but it can be a serious impediment when advance availability of external driving data makes it possible, in principle, to run some tasks from upcoming cycles before the current cycle is finished - as was suggested at the end of the previous section. This can occur, for instance, after operational delays (late arrival of external data, system maintenance, etc.) and to an even greater extent in historical case studies and parallel test suites started behind a real time operation. It can be a serious problem for suites that have little downtime between forecast cycles and therefore take many cycles to catch up after a delay. Without taking account of inter-cycle dependence, the best that can be done, in general, is to reduce the gap between cycles to zero as shown in Figure 5. A limited crude overlap of the single cycle job schedule may be possible for specific task sets, but the allowable overlap may change if new tasks are added, and it is still dangerous: it amounts to running different parts of a dependent system as if they were not dependent, so it cannot be guaranteed that some unforeseen delay in one cycle, after the next cycle has begun (e.g. due to resource contention or task failures), won’t result in dependency violations.




Figure 6: The complete dependency graph for the example suite, assuming the least possible inter-cycle dependence: the forecast models (a, b, and c) depend on their own previous instances. The dashed arrows show connections to previous and subsequent forecast cycles.





Figure 7: The optimal two cycle job schedule when the next cycle’s driving data is available in advance, possible in principle when inter-cycle dependence is handled explicitly.


Figure 7 shows, in contrast to Figure 4, the optimal two cycle job schedule obtained by respecting all inter-cycle dependence. This assumes no delays due to resource contention or otherwise - i.e. every task runs as soon as it is ready to run. The scheduler running this suite must be able to adapt dynamically to external conditions that impact on multi-cycle scheduling in the presence of inter-cycle dependence or else, again, risk bringing the system down with dependency violations.




Figure 8: Job schedules for the example suite after a delay of almost one whole forecast cycle, when inter-cycle dependence is taken into account (above the time axis), and when it is not (below the time axis). The colored lines indicate the time that each cycle is delayed, and normal “caught up” cycles are shaded gray.





Figure 9: Job schedules for the example suite in case study mode, or after a long delay, when the external driving data are available many cycles in advance. Above the time axis is the optimal schedule obtained when the suite is constrained only by its true dependencies, as in Figure 3, and underneath is the best that can be done, in general, when inter-cycle dependence is ignored.


To further illustrate the potential benefits of proper inter-cycle dependency handling, Figure 8 shows an operational delay of almost one whole cycle in a suite with little downtime between cycles. Above the time axis is the optimal schedule that is possible in principle when inter-cycle dependence is taken into account, and below it is the only safe schedule possible in general when it is ignored. In the former case, even the cycle immediately after the delay is hardly affected, and subsequent cycles are all on time, whilst in the latter case it takes five full cycles to catch up to normal real time operation.

Similarly, Figure 9 shows example suite job schedules for an historical case study, or when catching up after a very long delay; i.e. when the external driving data are available many cycles in advance. Task a, which as the most upstream forecast model is likely to be a resource intensive atmosphere or ocean model, has no upstream dependence on co-temporal tasks and can therefore run continuously, regardless of how much downstream processing is yet to be completed in its own, or any previous, forecast cycle (actually, task a does depend on co-temporal task x which waits on the external driving data, but that returns immediately when the data is available in advance, so the result stands). The other forecast models can also cycle continuously or with a short gap between, and some post processing tasks, which have no previous-instance dependence, can run continuously or even overlap (e.g. e in this case). Thus, even for this very simple example suite, tasks from three or four different cycles can in principle run simultaneously at any given time. In fact, if our tasks are able to trigger off internal outputs of upstream tasks, rather than waiting on full completion, successive instances of the forecast models could overlap as well (because model restart outputs are generally completed early in the forecast) for an even more efficient job schedule.

1.4 The Cylc Scheduling Algorithm




Figure 10: How cylc sees a suite, in contrast to the multi-cycle dependency graph of Figure 6. Task colors represent different cycle times, and the small squares and circles represent different prerequisites and outputs. A task can run when its prerequisites are satisfied by the outputs of other tasks in the pool.


Cylc manages a pool of proxy objects that represent the real tasks in a suite. Task proxies know how to run the real tasks that they represent, and they receive progress messages from the tasks as they run (usually reports of completed outputs). There is no global cycling mechanism to advance the suite; instead individual task proxies have their own private cycle time and spawn their own successors when the time is right. Task proxies are self-contained - they know their own prerequisites and outputs but are not aware of the wider suite. Inter-cycle dependence is not treated as special, and the task pool can be populated with tasks with many different cycle times. The task pool is illustrated in Figure 10. Whenever any task changes state due to completion of an output, every task checks to see if its own prerequisites have been satisfied. In effect, cylc gets a pool of tasks to self-organize by negotiating their own dependencies so that optimal scheduling, as described in the previous section, emerges naturally at run time.
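The essence of the algorithm can be sketched in a few lines of Python. This is emphatically not cylc source code - all class and message names here are invented for illustration - but it shows the key idea: task proxies carry their own prerequisites and outputs, and readiness emerges from matching one against the other across the whole pool, with inter-cycle dependence handled like any other.

```python
# Illustrative sketch of cylc-style dependency negotiation in a task
# pool. NOT cylc source code; all names are invented for the example.

class TaskProxy(object):
    def __init__(self, name, cycle, prerequisites):
        self.name = name
        self.cycle = cycle                       # private cycle time
        self.prerequisites = set(prerequisites)  # required output messages
        self.outputs = set()                     # completed output messages
        self.state = 'waiting'

    def ready(self, pool):
        # A task is ready when every prerequisite is matched by an
        # output completed by some task in the pool (any cycle time).
        completed = set()
        for task in pool:
            completed |= task.outputs
        return self.state == 'waiting' and self.prerequisites <= completed

# Two instances of model 'a' from consecutive cycles coexist in the
# pool; the warm-cycling dependence is nothing special.
a1 = TaskProxy('a', '2013010100', prerequisites=[])
a2 = TaskProxy('a', '2013010106', prerequisites=['a.2013010100 succeeded'])
pool = [a1, a2]

assert a1.ready(pool) and not a2.ready(pool)
a1.outputs.add('a.2013010100 succeeded')  # progress message from task a
assert a2.ready(pool)                     # prerequisite now satisfied
```

The point of the sketch is that no component holds a global view of the dependency graph: readiness is recomputed locally from whatever outputs happen to exist in the pool, which is why optimal multi-cycle scheduling can emerge at run time.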

2 Cylc Screenshots




Figure 11: A cylc suite definition in the vim editor.





Figure 12: gcylc dot and text views.





Figure 13: gcylc graph and text views.





Figure 14: A large suite graphed by cylc.


3 Required Software

 3.1 Known Version Compatibility Issues
 3.2 Other Software Used Internally By Cylc

The following packages are technically optional, as you can construct and run cylc suites without dependency graphing, the gcylc GUI, or template processing, but this is not recommended; and without Jinja2 you will not be able to run many of the example suites:

If you use a binary package manager to install graphviz you may also need a couple of devel packages for the pygraphviz build:

This user guide can be generated from the LaTeX source by running make in the top level cylc directory after download. The following TeX packages are required (but note that the exact packages required may be somewhat OS or distribution-dependent):

And for HTML versions of the User Guide:

Finally, cylc makes heavy use of Python ordered dictionary data structures. Significant speedup in parsing large suites can be had by installing the fast C-coded ordereddict module by Anthon van der Neut:

This module is currently included with cylc under $CYLC_DIR/ext, and is built by the top level cylc Makefile. If you install the resulting library appropriately cylc will automatically use it in place of a slower Python implementation of the ordered dictionary structure.
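A fallback import is the usual way to take advantage of an optional C-coded module like this. The sketch below is illustrative: the module and class names for the C version (`_ordereddict.ordereddict`) are an assumption based on the package described above, and the fallback is the pure Python `OrderedDict` from the standard library.

```python
# Prefer the fast C-coded ordereddict if installed; otherwise fall
# back to the standard library implementation. (The C module/class
# names here are an assumption, not taken from cylc source.)
try:
    from _ordereddict import ordereddict as OrderedDict
except ImportError:
    from collections import OrderedDict

# Ordered dictionaries preserve insertion order, which matters when
# parsing suite.rc files whose section order is significant.
d = OrderedDict()
d['suite'] = 'tut.oneoff.basic'
d['task'] = 'hello'
print(list(d.keys()))  # -> ['suite', 'task']
```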

3.1 Known Version Compatibility Issues

Cylc should run “out of the box” on recent Linux distributions.

For distributed suites the Pyro versions installed on all suite or task hosts must be mutually compatible. Using identical Pyro versions guarantees compatibility but may not be strictly necessary because cylc uses Pyro rather minimally.

3.1.1 Pyro 3.9 and Earlier

Beware of Linux distributions that come packaged with old Pyro versions. Pyro 3.9 and earlier is not compatible with the new-style Python classes used in cylc. It has been reported that Ubuntu 10.04 (Lucid Lynx), released in April 2010, suffers from this problem. Surprisingly, so does Ubuntu 11.10 (Oneiric Ocelot), released in October 2011 - and therefore, presumably, all earlier Ubuntu releases. Attempting to run a suite with Pyro 3.9 or earlier installed results in the following Python traceback:

 
Traceback (most recent call last): 
 File "/home/hilary/cylc/bin/_run", line 232, in <module> 
 server = start() 
 File "/home/hilary/cylc/bin/_run", line 92, in __init__ 
 scheduler.__init__( self ) 
 File "/home/hilary/cylc/lib/cylc/scheduler.py", line 141, in 
__init__ 
 self.load_tasks() 
 File "/home/hilary/cylc/bin/_run", line 141, in load_tasks_cold 
 itask = self.config.get_task_proxy( name, tag, 'waiting', 
stopctime=None, startup=True ) 
 File "/home/hilary/cylc/lib/cylc/config.py", line 1252, in 
get_task_proxy 
 return self.taskdefs[name].get_task_class()( ctime, state, 
stopctime, startup ) 
 File "/home/hilary/cylc/lib/cylc/taskdef.py", line 453, in 
tclass_init 
 print '-', sself.__class__.__name__, sself.__class__.__bases_ 
AttributeError: type object 'A' has no attribute '_taskdef__bases_' 
_run --debug testsuite.1322742021 2010010106 failed: 1

3.1.2 Apple Mac OSX

It has been reported that cylc runs fine on OSX 10.6 SnowLeopard, but on OSX 10.7 Lion there is an issue with constructing proper FQDNs (Fully Qualified Domain Names) that requires a change to the DNS service. Here’s how to solve the problem:

3.2 Other Software Used Internally By Cylc

Cylc has incorporated a custom-modified version of the xdot graph viewer (http://code.google.com/p/jrfonseca/wiki/XDot, LGPL license).

4 Installation

 4.1 Install The External Dependencies
 4.2 Install Cylc
 4.3 Automated Tests
 4.4 Local User Installation
 4.5 Upgrading To New Cylc Versions

4.1 Install The External Dependencies

First install Pyro, graphviz, Pygraphviz, Jinja2, TeX, and ImageMagick using the package manager on your system if possible; otherwise download the packages manually and follow their native installation documentation. On a modern Linux system, this is very easy. For example, to install cylc-5.1.0 on the Fedora 18 Linux distribution:

 
shell$ yum install graphviz       # (2.28) 
shell$ yum install graphviz-devel # (for pygraphviz build) 
shell$ yum install python-devel   # (ditto) 
 
# TeX packages, and ImageMagick, for generating the Cylc User Guide: 
shell$ yum install texlive 
shell$ yum install texlive-tex4ht 
shell$ yum install texlive-tocloft 
shell$ yum install texlive-framed 
shell$ yum install texlive-preprint 
shell$ yum install ImageMagick 
 
# Python packages: 
shell$ easy_install pyro   # (3.16) 
shell$ easy_install Jinja2 # (2.6) 
shell$ easy_install pygraphviz 
 
# (sqlite 3.7.13 already installed on the system)

If you do not have root access on your intended cylc host machine and cannot get a sysadmin to do this at system level, see Section 4.4 for tips on installing everything to a local user account.

Now check that everything other than the LaTeX packages is installed properly:

 
shell$ cylc check-software 
Checking for Python >= 2.5 ... found 2.7.3 ... ok 
Checking for non-Python packages: 
 + Graphviz ... ok 
 + sqlite ... ok 
Checking for Python packages: 
 + Pyro-3 ... ok 
 + Jinja2 ... ok 
 + pygraphviz ... ok 
 + pygtk ... ok

If this command reports any errors then the packages concerned are not installed, not in the system Python search path, or (for a local install) not present in your $PYTHONPATH variable.

4.2 Install Cylc

Cylc installs into a normal user account, as an unpacked release tarball or a git repository clone. See the INSTALL file in the source tree for instructions (also listed in Section G).

4.2.1 Create A Site Config File

Site and user config files define some important parameters that affect all suites, some of which may need to be customized for your site. Section 6 describes how to generate an initial site file and where to install it. All legal site and user config items are defined in Appendix B.

4.3 Automated Tests

Cylc has a battery of self-diagnosing tests, invoked by the command cylc test-battery. These are primarily intended to check that new developments don’t break existing functionality, but you can also run them after installation to check that everything works properly. See cylc test-battery --help before running the tests.

4.4 Local User Installation

It is possible to install cylc and all of its software prerequisites under your own user account. Cylc itself is already designed to be installed into a normal user account, just follow the instructions above in Section 4.2. For the other packages, depending on the installation method used for each, it is just a matter of learning how to change the default install path prefix from, for example, /usr/local to $HOME/installed/usr/local and then ensuring that the resulting local package paths are set properly in your PYTHONPATH environment variable.
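The general recipe can be sketched as follows. The paths, the Python version, and the distutils-style install command are illustrative assumptions, not prescriptions; substitute whatever each package's own installation method requires.

```shell
# Hedged sketch of a local (non-root) install layout.
# Paths and the python2.7 version string are illustrative only.
PREFIX="$HOME/installed/usr/local"
mkdir -p "$PREFIX/lib/python2.7/site-packages"

# Make locally installed Python packages importable:
export PYTHONPATH="$PREFIX/lib/python2.7/site-packages:$PYTHONPATH"

# Then, for a typical distutils-based package, something like:
#   python setup.py install --prefix="$PREFIX"
# and add $PREFIX/bin to PATH for any installed scripts:
export PATH="$PREFIX/bin:$PATH"
```

The exports belong in your shell startup file so that cylc, and the `cylc check-software` command below, see the local packages in every session.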

4.4.1 Some Guidelines

Finally, check that everything (other than LaTeX for document processing) is installed:

 
shell$ cylc check-software 
Checking for Python >= 2.5 ... found 2.7.3 ... ok 
Checking for non-Python packages: 
 + Graphviz ... ok 
 + sqlite ... ok 
Checking for Python packages: 
 + Pyro-3 ... ok 
 + Jinja2 ... ok 
 + pygraphviz ... ok 
 + pygtk ... ok

If this command reports any errors then the packages concerned are not installed, not in the system Python search path, or (for a local install) not present in your $PYTHONPATH variable.

4.5 Upgrading To New Cylc Versions

Upgrading is just a matter of unpacking the new cylc release. Successive cylc releases can be installed in parallel as suggested in the INSTALL file (Section G).

5 On The Meaning Of Cycle Time In Cylc

You may be accustomed to the idea that a forecasting suite has a “current cycle time”, which is typically the analysis time or nominal start time of the main forecast model(s) in the suite, and that the whole suite advances to the next forecast cycle when all tasks in the current cycle have finished (or even when a particular wall clock time is reached, in real time operation). As explained in the Introduction, this is not how cylc works.

Cylc suites advance by means of individual tasks with private cycle times independently spawning successors at the next valid cycle time for the task, not by incrementing a suite-wide forecast cycle. Each task will be submitted when its own prerequisites are satisfied, regardless of whether tasks with other cycle times happen to be running at the time. It may still be convenient at times, however, to refer to the “current cycle”, the “previous cycle”, or the “next cycle” and so forth, with reference to a particular task, or in the sense of all tasks that “belong to” a particular forecast cycle. But keep in mind that the members of these groups may not be present simultaneously in the running suite - i.e. different tasks may pass through the “current cycle” (etc.) at different times as the suite evolves, particularly in delayed (catch up) operation.
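Successor spawning amounts to simple arithmetic on each task's private cycle time. A minimal sketch, assuming 6-hourly cycling and the YYYYMMDDHH cycle time format used in this guide (the function name is invented; this is not cylc code):

```python
# Illustrative sketch: a cycling task computes its own successor's
# cycle time by incrementing its private cycle time. Not cylc code.
from datetime import datetime, timedelta

def next_cycle(cycle_time, hours=6):
    """Return the next valid cycle time, in YYYYMMDDHH format."""
    t = datetime.strptime(cycle_time, '%Y%m%d%H')
    return (t + timedelta(hours=hours)).strftime('%Y%m%d%H')

print(next_cycle('2013010100'))  # -> 2013010106
print(next_cycle('2013010118'))  # -> 2013010200 (rolls over the day)
```

Because each task does this independently, tasks from several different cycles can coexist in the suite at once, which is exactly the behaviour described above.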

6 Site And User Configuration Files

Cylc site and user configuration files contain settings that affect all suites. Some of these, such as the range of network ports used by cylc, should be set at site level,

 
# cylc site config file 
/path/to/cylc/conf/siterc/site.rc

Others, such as the preferred text editor for suite definitions, can be overridden by users,

 
# cylc user config file 
$HOME/.cylc/user.rc

The cylc get-global-config command retrieves current global settings consisting of cylc defaults overridden by site settings, if any, overridden by user settings, if any. To generate an initial site or user config file:

 
shell$ cylc get-global-config --print > $HOME/.cylc/user.rc

Settings that do not need to be changed should be deleted or commented out of user config files so that they don’t override future changes to the site file.
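For example, a trimmed-down user config file that overrides only the text editors might contain nothing but the following (the editor values are illustrative; the [editors] items are the ones used later in the tutorial):

```ini
# $HOME/.cylc/user.rc - keep only the items you actually override,
# so future changes to the site file are not masked.
[editors]
    terminal = emacs -nw
    gui = emacs
```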

Legal items, values, and system defaults are documented in the Site And User Config File Reference, Section B.

7 Tutorial

 7.1 User Config File
 7.2 User Interfaces
 7.3 Suite Definitions
 7.4 Suite Name Registration
 7.5 Suite Passphrases
 7.6 Import The Example Suites
 7.7 Rename The Imported Tutorial Suites
 7.8 Suite Validation
 7.9 Hello World in Cylc
 7.10 Editing Suites
 7.11 Running Suites
 7.12 Discovering Running Suites
 7.13 Task Identifiers
 7.14 Job Submission: How Tasks Are Executed
 7.15 Locating Suite And Task Output
 7.16 Remote Tasks
 7.17 Task Triggering
 7.18 Runtime Inheritance
 7.19 Triggering Families
 7.20 Triggering Off Families
 7.21 Suite Visualization
 7.22 External Task Scripts
 7.23 Cycling Tasks
 7.24 Jinja2
 7.25 Task Retry On Failure
 7.26 Other Users’ Suites
 7.27 Searching A Suite
 7.28 Other Things To Try

This section provides a hands-on tutorial introduction to basic cylc suite preparation and control. A number of features are not yet touched on by the tutorial examples, however, so please also read the rest of the User Guide.

7.1 User Config File

Some global parameters affecting cylc’s behaviour are defined in a site config file, and can be customized per user in user config files. For example, to choose the text editor invoked by cylc on suite definitions:

 
# $HOME/.cylc/user.rc 
[editors] 
    terminal = vim 
    gui = gvim -f

7.2 User Interfaces

Cylc has command line (CLI) and graphical (GUI) user interfaces. To get access to them you just need the cylc bin directory in your shell search path:

 
export PATH=/path/to/cylc/bin:$PATH

7.2.1 Command Line (CLI)

The command line interface is unified under a single top level cylc command that provides access to many sub-commands and their help documentation.

 
shell$ cylc help       # top level command help 
shell$ cylc run --help # example command-specific help

7.2.2 Graphical (GUI)

The cylc GUI covers the same functionality as the CLI with the addition of live suite monitoring capability, and it is intended to be easier to use without expert knowledge. It can start and stop suites, or connect to suites that are already running; in either case, shutting down the GUI does not affect the suite itself.

 
shell$ gcylc & # or: 
shell$ cylc gui & 
shell$ cylc gsummary & # summary GUI for multiple running suites

Clicking on a suite in the summary GUI, shown in Figure 15, opens a gcylc instance for it.

7.3 Suite Definitions

Cylc suites are defined by extended-INI format suite.rc files (the main file format extension is section nesting). These reside in suite definition directories that may also contain a bin directory and any other suite-related files.

7.4 Suite Name Registration

Suite registration associates a name with a suite definition directory, in a simple database. Cylc commands that parse suite definition files can take the file path or the suite name as input; commands that interact with running suites have to target the suite by name.

 
# target a suite by file path: 
shell$ cylc validate /path/to/my/suite/suite.rc 
shell$ cylc graph /path/to/my/suite/suite.rc 
# register a name for a suite: 
shell$ cylc register my.suite /path/to/my/suite/ 
# target a suite by name: 
shell$ cylc graph my.suite 
shell$ cylc validate my.suite 
shell$ cylc run my.suite 
shell$ cylc stop my.suite 
# etc.

7.5 Suite Passphrases

At registration time a random string of characters is written to a file called passphrase in the suite definition directory. At run time any contact from cylc client programs (running tasks, user commands, the cylc GUI) must use the same passphrase to authenticate with the running suite. This prevents unauthorized users interfering in your suites (network communication between running processes is not subject to Unix user account permissions). Local tasks and user commands on the suite host automatically use the passphrase in the suite definition directory. For remote tasks and commands, however, the passphrase must be installed appropriately on the remote account - see Section 7.16 below.

7.6 Import The Example Suites

Run the following command to import cylc’s example suites to a chosen directory location and register them for use under the examples name group:

 
shell$ cylc import-examples $TMPDIR examples

(first check that $TMPDIR is defined in your environment, or else use a different location). List the newly registered tutorial suites using the cylc db print command:

 
shell$ cylc db print examples.tutorial -y 
examples.tutorial.oneoff.jinja2   /tmp/examples/tutorial/oneoff/jinja2 
examples.tutorial.cycling.two     /tmp/examples/tutorial/cycling/two 
examples.tutorial.cycling.three   /tmp/examples/tutorial/cycling/three 
examples.tutorial.oneoff.remote   /tmp/examples/tutorial/oneoff/remote 
# ...

See cylc db print --help for other display options. The tree-form display shows how hierarchical suite names can be used to organize related suites nicely (suite names do not have to be related to their source directory paths, although they are in this case):

 
shell$ cylc db pr --tree -x examples.tutorial 
examples 
 ‘-tutorial 
   ‘-cycling 
   | |-four       Inter-cycle dependence + a start-up task 
   | | ... 
   | |-two        Two cycling tasks with inter-cycle dependence 
   | ‘-three      Intercycle dependence + an asynchronous task 
   ‘-oneoff 
     |-retry      A task with automatic retry on failure 
     |-remote     Hello World! on a remote host 
     | ... 
     |-basic      The cylc Hello World! suite 
     ‘-jobsub     Hello World! by 'at' job submission

7.7 Rename The Imported Tutorial Suites

 Rename (re-register) the tutorial suites to make their names a bit shorter:

 
$ cylc rereg examples.tutorial tut 
REREGISTER examples.tutorial.oneoff.jinja2 to tut.oneoff.jinja2 
#... 
shell$ cylc db print -x tut 
tut.oneoff.external   Hello World! from an external task script 
# ...

7.8 Suite Validation

Suite definitions can be validated against the suite.rc file format specification to detect many types of error without running the suite.

 
# pass: 
shell$ cylc validate tut.oneoff.basic 
Suite tut.oneoff.basic is valid for cylc-5.3.0 
shell$ echo $? 
0 
# fail: 
shell$ cylc validate my.bad.suite 
'Illegal item: [scheduling]special tusks' 
shell$ echo $? 
1

7.9 Hello World in Cylc

suite: tut.oneoff.basic

Here’s the traditional Hello World program rendered as a cylc suite:

 
title = "The cylc Hello World! suite" 
[scheduling] 
    [[dependencies]] 
        graph = "hello" 
[runtime] 
    [[hello]] 
        command scripting = "sleep 10; echo Hello World!"

Cylc suites feature a clean separation of scheduling configuration, which determines when tasks are ready to run; and runtime configuration, which determines what to run (and where and how to run it) when a task is ready. In this example the [scheduling] section defines a single task called hello that triggers immediately when the suite starts up. When the task finishes the suite shuts down. That this is a dependency graph will be more obvious when more tasks are added. Under the [runtime] section the command scripting item defines a simple inlined implementation for hello: it sleeps for ten seconds, then prints Hello World!, and exits. This ends up in a job script generated by cylc to encapsulate the task (below) and, thanks to some defaults designed to allow quick prototyping of new suites, it is submitted to run as a background job on the suite host. In fact cylc even provides a default task implementation that makes the entire [runtime] section technically optional:
 
title = "The minimal complete runnable cylc suite" 
[scheduling] 
    [[dependencies]] 
        graph = "foo" 
# (actually, 'title' is optional ... and so is this comment)
(the resulting dummy task just prints out some identifying information and exits).

7.10 Editing Suites

The text editor invoked by cylc on suite definitions is determined by cylc site and user config files, as shown above in Section 7.2. Check that you have renamed the tutorial examples suites as described just above and open the Hello World suite definition in your text editor:

 
shell$ cylc edit tut.oneoff.basic # in-terminal 
shell$ cylc edit -g tut.oneoff.basic & # or GUI

Alternatively, start gcylc on the suite,

 
shell$ gcylc tut.oneoff.basic &

and choose Suite Edit from the menu.

The editor will be invoked from the suite definition directory for easy access to other suite files (in this case there are none). There are syntax highlighting control files for several text editors under /path/to/cylc/conf/; see in-file comments for installation instructions.

7.11 Running Suites

7.11.1 CLI

Run the suite at the terminal with the cylc run command:

 
shell$ cylc run --no-detach tut.oneoff.basic 
*************************************************************************
             The Cylc Suite Engine [5.3.0-436-ge9ba-dirty]              
              Copyright (C) 2008-2013 Hilary Oliver, NIWA               
                                                                        
 This program comes with ABSOLUTELY NO WARRANTY.  It is free software;  
 you are welcome to redistribute it under certain conditions. Details:  
           'cylc license conditions'; 'cylc license warranty'
*************************************************************************
 
1 task ready 
   hello.1 submitting now 
SUBMIT No.1(1,1): 
    (/home/hilary/cylc-run/tut.oneoff.basic/log/job/hello.1.1 </dev/null 
    1>/home/hilary/cylc-run/tut.oneoff.basic/log/job/hello.1.1.out 
    2>/home/hilary/cylc-run/tut.oneoff.basic/log/job/hello.1.1.err & 
    echo $!; wait ) 
   hello.1 submission succeeded 
   hello.1 submit_method_id=28480 
   hello.1 started at 2013-09-20T13:51:34 
   hello.1 succeeded at 2013-09-20T13:51:46 
 
Initiating suite shutdown 
DONE

The --no-detach option tells cylc not to daemonize so that output is printed to the terminal. When the task is ready to run cylc generates a special job script to run it. The command line used to submit the job script, which depends on the task’s job submission method and host machine, is printed to suite stdout. Messages subsequently received from the running task are also printed. More detailed information is written, time-stamped, to a suite log. The suite automatically shuts down when and if all tasks have succeeded.

7.11.2 GUI

The cylc GUI can start and stop suites, or (re)connect to suites that are already running:

 
shell$ cylc gui tut.oneoff.basic &

Use the tool bar Play button, or the Control → Run menu item, to run the suite again. You may want to alter the suite definition slightly to make the task take longer to run. Try right-clicking on the hello task to view its output logs. The relative merits of the three suite views - dot, tree, and graph - will be more apparent later when we have more tasks. Closing the GUI does not affect the suite itself.

7.12 Discovering Running Suites

Suites that are currently running can be detected with command line or GUI tools:

 
# list currently running suites and their port numbers: 
shell$ cylc scan 
# GUI summary view of running suites: 
shell$ cylc gsummary &




Figure 15: The cylc gsummary GUI


7.13 Task Identifiers

At run time, task instances are identified by name, which is determined entirely by the suite definition, and a cycle time or integer tag:

 
foo.2010080800   # a cycling task 
bar.1            # a non-cycling task

Non-cycling tasks usually just have the tag 1, but this still has to be used to target the task instance with cylc commands.
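The NAME.TAG form can be pulled apart with ordinary shell parameter expansion. Here is a minimal sketch (plain shell, no cylc required) that splits a task identifier at its final dot, matching the convention described above:

```shell
# Split a cylc task ID of the form NAME.TAG at the final dot.
id="foo.2010080800"
name=${id%.*}    # task name: everything before the last dot
tag=${id##*.}    # cycle time or integer tag: everything after it
echo "name=$name tag=$tag"
```

The `%.*` and `##*.` expansions are standard POSIX shell, so this works in any task script that needs to decompose an identifier.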

7.14 Job Submission: How Tasks Are Executed

suite: tut.oneoff.jobsub

Task job scripts are generated by cylc to wrap the task implementation specified in the suite definition (environment, command scripting, etc.) in error trapping code and cylc messaging calls to report task progress back to the suite. Job scripts are saved to the suite run directory - the location can be seen in the job submission commands printed to suite stdout. They can be viewed by right-clicking on the task in the cylc GUI, or printed to the terminal:

 
shell$ cylc log tut.oneoff.basic hello.1

Or a new job script can be generated on the fly for inspection,

 
shell$ cylc jobscript tut.oneoff.basic hello.1

Take a look at the job script generated for hello.1 during the suite run above. The command scripting should be clearly visible toward the bottom of the file.

The hello task in the first tutorial suite defaults to running as a background job on the suite host. To submit it to the Unix at scheduler instead, configure its job submission settings as in tut.oneoff.jobsub:

 
[runtime] 
    [[hello]] 
        command scripting = "sleep 10; echo Hello World!" 
        [[[job submission]]] 
            method = at

If you run the suite (first check that the at daemon atd is running on the suite host) a different, at-specific job submission command will be used and printed to stdout:

 
shell$ cylc run --no-detach tut.oneoff.jobsub 
#... 
1 task ready 
   hello.1 submitting now 
SUBMIT No.1(1,1): 
    (echo "/home/hilary/cylc-run/tut.oneoff.jobsub/hello.1.1 \ 
     1>/home/hilary/cylc-run/tut.oneoff.jobsub/hello.1.1.out \ 
     2>/home/hilary/cylc-run/tut.oneoff.jobsub/hello.1.1.err" | at now) 
   hello.1 submission succeeded 
   hello.1 started at 2013-09-28T17:25:07 
   hello.1 submit_method_id=2191 
   hello.1 succeeded at 2013-09-28T17:25:17

Cylc supports a number of different job submission methods. Tasks submitted to external batch queuing systems like at, PBS, SLURM, or LoadLeveler will be displayed as submitted in cylc until they actually start executing.

7.15 Locating Suite And Task Output

If the --no-detach option is not used, suite stdout and stderr will be directed to the suite run directory along with the time-stamped suite log file, and task job scripts and job logs (task stdout and stderr). The default suite run directory location is $HOME/cylc-run:

 
shell$ tree $HOME/cylc-run/tut.oneoff.basic/ 
|-- cylc-suite.db       # suite run database 
|-- cylc-suite-env      # suite environment file 
|-- log                 # suite log files 
|   |-- job                 # task job logs 
|   |   |-- hello.1.1           # task job script 
|   |   |-- hello.1.1.err       # task stderr log 
|   |   |-- hello.1.1.out       # task stdout log 
|   |   ‘-- hello.1.1.status    # task status file 
|   ‘-- suite               # suite server logs 
|       |-- err                 # suite stderr log 
|       |-- log                 # suite event log 
|       ‘-- out                 # suite stdout log 
|-- state               # suite state dump files 
|   |-- state 
|   |-- state-1 
|   ‘-- state-2 
|-- share               # suite share directory 
‘-- work                # suite work directory

The suite run database, suite environment file, suite state files, and task status files are used internally by cylc. Tasks execute in sub-directories of work/, which are automatically deleted if empty when the task finishes. The suite share/ directory is made available to all tasks (by $CYLC_SUITE_SHARE_DIR) as a common share space. Job log filenames have the task try number appended (here just 1) - this increments from 1 if a task is configured to retry on failure, to avoid overwriting the logs from previous tries.
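The try-number suffix on job log filenames follows a simple NAME.CYCLE.TRY pattern, as seen in the directory listing above. The following sketch (plain shell, assuming that layout) shows how the filenames evolve across retries:

```shell
# Sketch: job log names gain a try-number suffix, NAME.CYCLE.TRY,
# incrementing from 1 on each retry so earlier logs are not overwritten.
task=hello
cycle=1
logs=""
for try in 1 2 3; do
  logs="$logs ${task}.${cycle}.${try}.out"
done
logs=${logs# }   # trim the leading space
echo "$logs"
```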

The top level run directory location can be changed in site and user config files if necessary, and the suite share and work locations can be configured separately because of the potentially larger disk space requirement.

Task job logs can be viewed by right-clicking on tasks in the gcylc GUI (so long as the task proxy is live in the suite), manually accessed from the log directory (of course), or printed to the terminal with the cylc log command:

 
# suite logs: 
shell$ cylc log    tut.oneoff.basic           # suite event log 
shell$ cylc log -o tut.oneoff.basic           # suite stdout log 
shell$ cylc log -e tut.oneoff.basic           # suite stderr log 
# task logs: 
shell$ cylc log    tut.oneoff.basic hello.1   # task job script 
shell$ cylc log -o tut.oneoff.basic hello.1   # task stdout log 
shell$ cylc log -e tut.oneoff.basic hello.1   # task stderr log

For a more sophisticated web-based interface to suite and task logs, see Rose in Section 14.

7.16 Remote Tasks

suite: tut.oneoff.remote

The hello task in the first two tutorial suites defaults to running on the suite host. To make it run on a remote host instead change its runtime configuration as in tut.oneoff.remote:

 
[runtime] 
    [[hello]] 
        command scripting = "sleep 10; echo Hello World!" 
        [[[remote]]] 
            host = server1.niwa.co.nz

For remote task hosting to work several requirements must be satisfied, chiefly non-interactive (e.g. key-based) ssh access to the task host, and a cylc installation that is visible in the login environment there.

If your username is different on the task host the [[[remote]]] section also supports an owner=username item, or your $HOME/.ssh/config file can be configured for username translation.

If you configure a task host according to the requirements above and run the suite again you’ll see that the job submission command printed to suite stdout is now considerably more complicated. That’s because it has to create remote log directories, source login scripts to ensure cylc is visible on the remote host, pipe the task job script over, and submit it to run there by the configured job submission method:

 
shell$ cylc run --no-detach tut.oneoff.remote 
# ... 
1 task ready 
   hello.1 submitting now 
SUBMIT No.1(1,1): ssh -oBatchMode=yes server1.niwa.co.nz \ 
    "test -f /etc/profile && . /etc/profile 1>/dev/null 2>&1; \ 
     test -f $HOME/.profile && . $HOME/.profile 1>/dev/null 2>&1; \ 
     mkdir -p $(dirname $HOME/cylc-run/tut.oneoff.remote/log/job/hello.1.1) \ 
     && cat >$HOME/cylc-run/tut.oneoff.remote/log/job/hello.1.1 && \ 
     chmod +x $HOME/cylc-run/tut.oneoff.remote/log/job/hello.1.1 && \ 
     (($HOME/cylc-run/tut.oneoff.remote/log/job/hello.1.1 </dev/null \ 
     1>$HOME/cylc-run/tut.oneoff.remote/log/job/hello.1.1.out \ 
     2>$HOME/cylc-run/tut.oneoff.remote/log/job/hello.1.1.err) & echo $!; wait )" \ 
     < /home/hilary/cylc-run/tut.oneoff.remote/log/job/hello.1.1 
   hello.1 submission succeeded 
   hello.1 started at 2013-09-28T17:45:13 
   hello.1 submit_method_id=27256 
   hello.1 succeeded at 2013-09-28T17:45:23

Remote task job logs are saved to the suite run directory on the task host, not on the suite host, although they can be retrieved by right-clicking on the task in the GUI. Rose (section 14.1) provides a task event handler to pull logs back to the suite host.

7.17 Task Triggering

suite: tut.oneoff.goodbye

To make a second task called goodbye trigger after hello finishes successfully, return to the original example, tut.oneoff.basic, and change the suite graph as in tut.oneoff.goodbye:

 
[scheduling] 
    [[dependencies]] 
        graph = "hello => goodbye"

or to trigger it at the same time as hello,

 
[scheduling] 
    [[dependencies]] 
        graph = "hello & goodbye"

and configure the new task’s behaviour under [runtime]:

 
[runtime] 
    [[goodbye]] 
        command scripting = "sleep 10; echo Goodbye World!"

Run tut.oneoff.goodbye and check the output from the new task:

 
shell$ cat ~/cylc-run/tut.oneoff.goodbye/log/job/goodbye.1.1.out 
# (or cylc log -o tut.oneoff.goodbye goodbye.1)
JOB SCRIPT STARTING 
cylc (scheduler - 2013/09/23 18:15:09): goodbye.1 started at 2013-09-23T18:15:09 
Send message: try 1 of 7 succeeded 
cylc Suite and Task Identity: 
  Suite Name  : tut.oneoff.goodbye 
  Suite Host  : server0 
  Suite Port  : 7766 
  Suite Owner : hilary 
  Task ID     : goodbye.1 
  Task Host   : server0 
  Task Owner  : hilary 
  Task Try No.: 1 
 
Goodbye World! 
 
cylc (scheduler - 2013/09/23 18:15:20): goodbye.1 succeeded at 2013-09-23T18:15:20 
Send message: try 1 of 7 succeeded 
JOB SCRIPT EXITING (TASK SUCCEEDED)

7.17.1 Failure And Suicide Triggering
suite: tut.oneoff.suicide

Task names in the graph string can be qualified with a state indicator to trigger off task states other than success:

 
    graph = """ 
 a => b        # trigger b if a succeeds 
 c:submit => d # trigger d if c submits 
 e:finish => f # trigger f if e succeeds or fails 
 g:start  => h # trigger h if g starts executing 
 i:fail   => j # trigger j if i fails 
            """

A common use of this is to automate recovery from known modes of failure:

 
    graph = "goodbye:fail => really_goodbye"

i.e. if task goodbye fails, trigger another task that (presumably) really says goodbye.

Failure triggering generally requires use of suicide triggers as well, to remove the recovery task if it isn’t required (otherwise it would hang about indefinitely in the waiting state):

 
[scheduling] 
    [[dependencies]] 
        graph = """hello => goodbye 
            goodbye:fail => really_goodbye 
         goodbye => !really_goodbye # suicide"""

This means if goodbye fails, trigger really_goodbye; and otherwise, if goodbye succeeds, remove really_goodbye from the suite.

Try running tut.oneoff.suicide, which also configures the hello task’s runtime to make it fail, to see how this works.

7.18 Runtime Inheritance

suite: tut.oneoff.inherit

The [runtime] section is actually a multiple inheritance hierarchy. Each subsection is a namespace that represents a task, or if it inherits from other namespaces, a family. This allows common configuration to be factored out of related tasks very efficiently.

 
title = "Simple runtime inheritance example" 
[scheduling] 
    [[dependencies]] 
        graph = "hello => goodbye" 
[runtime] 
    [[root]] 
        command scripting = "sleep 10; echo $GREETING World!" 
    [[hello]] 
        [[[environment]]] 
            GREETING = Hello 
    [[goodbye]] 
        [[[environment]]] 
            GREETING = Goodbye
The [root] namespace is at the root of all runtime hierarchies. It provides defaults for all tasks in the suite. Here both tasks inherit command scripting from root, which they customize with different values of the environment variable $GREETING. Note that inheritance from root is implicit; from other parents an explicit inherit = PARENT is required, as shown below.
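The effect of the inheritance above can be mimicked in plain shell: one shared "root" command, parameterized per task by $GREETING. This is only a rough analogy (cylc's actual inheritance merges whole config namespaces), but it shows the factoring at work:

```shell
# Rough shell analogy for runtime inheritance: both "tasks" share the
# same root command and customize it via the GREETING environment variable.
root_command() { echo "$GREETING World!"; }

hello_out=$(GREETING=Hello; root_command)
goodbye_out=$(GREETING=Goodbye; root_command)
echo "$hello_out / $goodbye_out"
```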

7.19 Triggering Families

suite: tut.oneoff.ftrigger1

Task families defined by runtime inheritance can also be used as shorthand in graph trigger expressions. To see this, consider two “greeter” tasks that trigger off another task foo,

 
[scheduling] 
    [[dependencies]] 
        graph = "foo => greeter_1 & greeter_2"

If we put the common greeting functionality of greeter_1 and greeter_2 into a special GREETERS family, the graph can be expressed more efficiently like this:

 
[scheduling] 
    [[dependencies]] 
        graph = "foo => GREETERS"

i.e. if foo succeeds, trigger all members of GREETERS at once. Here’s the full suite with runtime hierarchy shown:

 
title = "Triggering a family of tasks" 
[scheduling] 
    [[dependencies]] 
        graph = "foo => GREETERS" 
[runtime] 
    [[root]] 
        pre-command scripting = "sleep 10" 
    [[foo]] 
        # empty (creates a dummy task) 
    [[GREETERS]] 
        command scripting = "echo $GREETING World!" 
    [[greeter_1]] 
        inherit = GREETERS 
        [[[environment]]] 
            GREETING = Hello 
    [[greeter_2]] 
        inherit = GREETERS 
        [[[environment]]] 
            GREETING = Goodbye
Verbose validation shows the family member substitution done when the suite definition is parsed:
 
shell$ cylc val -v tut.oneoff.ftrigger1 
... 
Graph line substitutions occurred: 
  IN: foo => GREETERS 
  OUT: foo => greeter_1 & greeter_2 
...

Experiment with the tut.oneoff.ftrigger1 suite to see how this works.

7.20 Triggering Off Families

suite: tut.oneoff.ftrigger2

Tasks (or families) can also trigger off other families, but in this case we need to specify what the trigger means in terms of the upstream family members. Here’s how to trigger another task bar if all members of GREETERS succeed:

 
[scheduling] 
    [[dependencies]] 
        graph = """foo => GREETERS 
            GREETERS:succeed-all => bar"""

Verbose validation in this case reports:

 
shell$ cylc val -v tut.oneoff.ftrigger2 
... 
Graph line substitutions occurred: 
  IN: GREETERS:succeed-all => bar 
  OUT: greeter_1:succeed & greeter_2:succeed => bar 
...

Cylc ignores family member qualifiers like succeed-all on the right side of a trigger arrow, where they don’t make sense, to allow the two graph lines above to be combined in simple cases:

 
[scheduling] 
    [[dependencies]] 
        graph = "foo => GREETERS:succeed-all => bar"

Any task triggering status qualified by -all or -any, for the members, can be used with a family trigger. For example, here’s how to trigger bar if all members of GREETERS finish (succeed or fail) and any of them succeeds:

 
[scheduling] 
    [[dependencies]] 
        graph = """foo => GREETERS 
    GREETERS:finish-all & GREETERS:succeed-any => bar"""

(use of GREETERS:succeed-any by itself here would trigger bar as soon as any one member of GREETERS completed successfully). Verbose validation now begins to show how family triggers can simplify complex graphs, even for this tiny two-member family:

 
shell$ cylc val -v tut.oneoff.ftrigger2 
... 
Graph line substitutions occurred: 
  IN: GREETERS:finish-all & GREETERS:succeed-any => bar 
  OUT: ( greeter_1:succeed | greeter_1:fail ) & \ 
       ( greeter_2:succeed | greeter_2:fail ) & \ 
       ( greeter_1:succeed | greeter_2:succeed ) => bar 
...

Experiment with tut.oneoff.ftrigger2 to see how this works.

7.21 Suite Visualization

You can style dependency graphs with an optional [visualization] section, as shown in tut.oneoff.ftrigger2:

 
[visualization] 
    default node attributes = "style=filled" 
    [[node attributes]] 
        foo = "fillcolor=#6789ab", "color=magenta" 
        GREETERS = "fillcolor=#ba9876" 
        bar = "fillcolor=#89ab67"

To display the graph in an interactive viewer,

 
shell$ cylc graph tut.oneoff.ftrigger2 &    # dependency graph 
shell$ cylc graph -n tut.oneoff.ftrigger2 & # runtime inheritance graph

It should look like Figure 16 (with the GREETERS family node expanded on the right).




Figure 16: The tut.oneoff.ftrigger2 dependency and runtime inheritance graphs


Graph styling can be applied to entire families at once, and custom “node groups” can also be defined for non-family groups.

7.22 External Task Scripts

suite: tut.oneoff.external

The tasks in our examples so far have all had inlined implementation, in the suite definition, but real tasks often need to call external commands, scripts, or executables. To try this, let’s return to the basic Hello World suite and cut the implementation of the task hello out to a file hello.sh in the suite bin directory:

 
#!/bin/sh 
 
set -e 
 
GREETING=${GREETING:-Goodbye} 
echo "$GREETING World! from $0"
Make the task script executable, and change the hello task runtime section to invoke it:
 
title = "Hello World! from an external task script" 
[scheduling] 
    [[dependencies]] 
        graph = "hello" 
[runtime] 
    [[hello]] 
        pre-command scripting = sleep 10 
        command scripting = hello.sh 
        [[[environment]]] 
            GREETING = Hello
If you run the suite now the new greeting from the external task script should appear in the hello task stdout log. This works because cylc automatically adds the suite bin directory to $PATH in the environment passed to tasks via their job scripts. To execute scripts (etc.) located elsewhere you can refer to the file by its full file path, or set $PATH appropriately yourself (this could be done via $HOME/.profile, which is sourced at the top of the task job script, or in the suite definition itself).

Note the use of set -e above to make the script abort on error. This allows the error trapping code in the task job script to automatically detect unforeseen errors.
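The effect of set -e can be demonstrated outside cylc with a trivial inner script: the failing command aborts the script immediately, so the non-zero exit status propagates to the caller (and, in a real suite, to cylc's error trapping):

```shell
# Demonstration of set -e: the inner script aborts at the first failing
# command (false) instead of carrying on to the echo, and exits non-zero.
status=0
sh -c 'set -e; false; echo "never reached"' || status=$?
echo "exit status: $status"
```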

7.23 Cycling Tasks

suite: tut.cycling.one

So far we’ve considered non-cycling tasks, which finish without spawning a successor. Cycling tasks have an associated cycle time, and they spawn a successor at their next cycle time as soon as they are submitted to run (so that successive instances of a task can run in parallel if the opportunity arises and their dependencies allow it).

Open the tut.cycling.one suite:

 
title = "Two cycling tasks, no inter-cycle dependence" 
[scheduling] 
    initial cycle time = 2013080800 
    final cycle time = 2013081200 
    [[dependencies]] 
        [[[0,12]]] # 00 and 12 hours every day 
            graph = "foo => bar" 
[visualization] 
    initial cycle time = 2013080800 
    final cycle time = 2013080900 
    [[node attributes]] 
        foo = "color=red" 
        bar = "color=blue"
The difference between cycling and non-cycling suites is all in the [scheduling] section, so we will leave the [runtime] section alone for now (this will result in cycling dummy tasks). Note that the graph is now defined under an Hours Of The Day cycling section - each task in the graph section will have a succession of cycle times ending in 00 or 12 hours, between specified initial and final cycle times (or indefinitely, if no final cycle time is given), as shown in Figure 17.




Figure 17: The tut.cycling.one suite


If you run this suite instances of foo will spawn in parallel out to the suite runahead limit, and each bar will trigger off the corresponding instance of foo at the same cycle time. The runahead limit prevents uncontrolled spawning of cycling tasks in suites that are not constrained by clock triggers in real time operation. The default limit is twice the shortest cycling interval in the suite. Cycling tasks can be declared sequential to prevent successive instances running in parallel, if necessary (Section 9.3.5).
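The default runahead limit stated above is easy to work out by hand. A trivial sketch, assuming this suite's 00/12 cycling (shortest interval 12 hours):

```shell
# Default runahead limit: twice the shortest cycling interval in the suite.
# Here the 0,12 cycling section gives a shortest interval of 12 hours.
shortest_interval=12
runahead_limit=$((2 * shortest_interval))
echo "runahead limit: ${runahead_limit} hours"
```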

Experiment with tut.cycling.one to see how cycling tasks work.

7.23.1 Inter-Cycle Triggers
suite: tut.cycling.two

The tut.cycling.two suite adds inter-cycle dependence to the previous example:

 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "foo[T-12] => foo => bar"

For any given cycle time T in the sequence defined by the cycling graph section heading, bar triggers off foo as before, but now foo triggers off its own previous instance foo[T-12]. Figure 18 shows how this connects the cycling graph sections together.




Figure 18: The tut.cycling.two suite


Experiment with this suite to see how inter-cycle triggers work. Note that the first instance of foo, at suite start-up, will trigger immediately in spite of its inter-cycle trigger, because cylc ignores triggers that reach back beyond the initial cycle time.

The presence of an inter-cycle trigger usually implies something special has to happen at start-up, however. If a model depends on its own previous instance for restart files, for instance, then some special process will typically have to generate the initial set of restart files when there is no previous cycle to do it. The following sections illustrate several ways of handling this in cylc suites.

7.23.2 Initial Asynchronous Tasks
suite: tut.cycling.three

Asynchronous tasks are non-cycling tasks with no associated cycle time, as in tut.cycling.three:

 
[scheduling] 
    [[dependencies]] 
        graph = "prep" 
        [[[0,12]]] 
            graph = "prep & foo[T-12] => foo => bar"

This is shown on the left of Figure 19.

Initially foo[T-12] will be ignored because its cycle time is earlier than the suite’s initial cycle time. In subsequent cycles dependence on the asynchronous task will be ignored and foo will trigger off its previous instance.

7.23.3 Initial Start-up Tasks
suite: tut.cycling.four

An alternative to an asynchronous task is a start-up task, which is a non-cycling task that nevertheless has an associated cycle time, as in tut.cycling.four:

 
[scheduling] 
    [[special tasks]] 
        start-up = prep 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "prep & foo[T-12] => foo => bar"

This is shown on the right of Figure 19. Initially foo[T-12] will be ignored because its cycle time is earlier than the suite’s initial cycle time. In subsequent cycles dependence on the start-up task will be ignored and foo will trigger off its previous instance.




Figure 19: The tut.cycling.three and tut.cycling.four suites


7.23.4 Initial Cold-start Tasks
suite: tut.cycling.five

Special one-off cold-start tasks provide another way to handle inter-cycle dependence at start-up, illustrated by tut.cycling.five.

 
[scheduling] 
    [[special tasks]] 
        cold-start = cfoo 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "cfoo | foo[T-6] => foo => bar"

For any given cycle time a warm-cycled model can in principle trigger off a previous instance of itself, or off a special cold-start process that generates an equivalent result in terms of restart files for the model. Cold-start tasks in cylc are intended to closely mirror this real process. Cylc somewhat arbitrarily assigns the cold-start task the same cycle time as the associated model, but a cycle time offset can be computed by the task itself if necessary.

The conditional OR trigger means this does not actually rely on cylc ignoring triggers that reach back beyond the initial cycle time. It also means dependence on the cold-start task can be retained in subsequent cycles without stalling the suite, and consequently cold-start tasks can be inserted later (cylc insert --help) to restart a model in-suite after a failure that requires missing one or more cycles. Conversely, because cylc now ignores pre-initial-cycle triggers, the cold-start OR construct is no longer necessary to bootstrap a suite with inter-cycle triggers into action - you can use the arguably simpler start-up tasks as described above.

Real suites may need a number of inter-dependent cold-start, start-up, or asynchronous tasks at start-up.

7.24 Jinja2

Cylc has built in support for the Jinja2 template processor, which allows us to embed code in suite definitions to generate the final result seen by cylc.

The tut.oneoff.jinja2 suite illustrates two common uses of Jinja2: changing suite content or structure based on the value of a logical switch; and iteratively generating dependencies and runtime configuration for groups of related tasks:

 
#!jinja2 
 
{% set MULTI = True %} 
{% set N_GOODBYES = 3 %} 
 
title = "A Jinja2 Hello World! suite" 
[scheduling] 
    [[dependencies]] 
{% if MULTI %} 
        graph = "hello => BYE" 
{% else %} 
        graph = "hello" 
{% endif %} 
 
[runtime] 
    [[hello]] 
        command scripting = "sleep 10; echo Hello World!" 
{% if MULTI %} 
    [[BYE]] 
        command scripting = "sleep 10; echo Goodbye World!" 
    {% for I in range(0,N_GOODBYES) %} 
    [[ goodbye_{{I}} ]] 
        inherit = BYE 
    {% endfor %} 
{% endif %}
To view the result of Jinja2 processing with the Jinja2 flag MULTI set to False:
 
shell$ cylc view --jinja2 --stdout tut.oneoff.jinja2
 
title = "A Jinja2 Hello World! suite" 
[scheduling] 
    [[dependencies]] 
        graph = "hello" 
[runtime] 
    [[hello]] 
        command scripting = "sleep 10; echo Hello World!"

And with MULTI set to True:

 
shell$ cylc view --jinja2 --stdout tut.oneoff.jinja2
 
title = "A Jinja2 Hello World! suite" 
[scheduling] 
    [[dependencies]] 
        graph = "hello => BYE" 
[runtime] 
    [[hello]] 
        command scripting = "sleep 10; echo Hello World!" 
    [[BYE]] 
        command scripting = "sleep 10; echo Goodbye World!" 
    [[ goodbye_0 ]] 
        inherit = BYE 
    [[ goodbye_1 ]] 
        inherit = BYE 
    [[ goodbye_2 ]] 
        inherit = BYE

7.25 Task Retry On Failure

suite: tut.oneoff.retry

Tasks can be configured to retry a number of times if they fail. An environment variable $CYLC_TASK_TRY_NUMBER increments from 1 on each successive try, and is passed to the task to allow different behaviour on the retry:

 
title = "A task with automatic retry on failure" 
[scheduling] 
    [[dependencies]] 
        graph = "hello" 
[runtime] 
    [[hello]] 
        retry delays = 2*0.1 # retry twice after 0.1 minute delays 
        command scripting = """ 
sleep 10 
if [[ $CYLC_TASK_TRY_NUMBER -lt 3 ]]; then 
    echo "Hello ... aborting!" 
    exit 1 
else 
    echo "Hello World!" 
fi"""
When a task with configured retries fails, its cylc task proxy goes into the retrying state until the next retry delay is up, then it resubmits. It only enters the failed state on a final definitive failure.
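The retry behaviour can be simulated outside cylc. In this stand-alone sketch the "task" is a shell function that fails while the try number is below 3, mirroring the command scripting above; the loop plays the role of cylc's resubmission:

```shell
# Stand-alone simulation of retry-on-failure: the task fails while the
# try number is below 3, then succeeds on the third attempt.
attempt() {
  CYLC_TASK_TRY_NUMBER=$1
  if [ "$CYLC_TASK_TRY_NUMBER" -lt 3 ]; then
    echo "Hello ... aborting!"
    return 1
  fi
  echo "Hello World!"
}

try=1
until attempt "$try"; do
  try=$((try + 1))
done
echo "succeeded on try $try"
```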

Experiment with tut.oneoff.retry to see how this works.

7.26 Other Users’ Suites

If you have read access to another user’s account (even on another host) it is possible to use cylc monitor to look at their suite’s progress without full shell access to their account. To do this, you will need to copy their suite passphrase to

 
    $HOME/.cylc/SUITE_HOST/SUITE_OWNER/SUITE_NAME/passphrase

(use of the host and owner names is optional here - Section 12.5.1) and also retrieve the port number of the running suite, which can be found in their account:

 
    ~SUITE_OWNER/.cylc/ports/SUITE_NAME

Once you have this information, you can run

 
shell$ cylc monitor --user=SUITE_OWNER --port=SUITE_PORT SUITE_NAME

to view the progress of their suite.

Other suite-connecting commands work in the same way too; see Section 12.9.

7.27 Searching A Suite

The cylc suite search tool reports pattern matches in the suite definition by line number, suite section, and file, even if the suite uses nested include-files, and by file and line number for matches in suite bin scripts:

 
shell$ cylc search examples/admin/suite.rc OUTPUT_DIR 
 
FILE: /home/hilary/cylc/examples/admin/suite.rc 
   SECTION: [runtime]->[[root]]->[[[environment]]] 
      (52):             OUTPUT_DIR = $WORKSPACE 
 
FILE: /home/hilary/cylc/examples/admin/bin/A.sh 
   (7): cylc checkvars -c OUTPUT_DIR RUNNING_DIR 
   (28): touch $OUTPUT_DIR/surface-winds-${CYLC_TASK_CYCLE_TIME}.nc 
   (29): touch $OUTPUT_DIR/precipitation-${CYLC_TASK_CYCLE_TIME}.nc 
 
#...

7.28 Other Things To Try

Almost every feature of cylc can be tested quickly and easily with a simple dummy suite. You can write your own, or start from one of the example suites in /path/to/cylc/examples (see use of cylc import-examples above) - they all run “out the box” and can be copied and modified at will.

8 Suite Name Registration And Passphrases

 8.1 Database Operations
 8.2 Suite Passphrases

Cylc commands target suites via names registered in a suite name database located at $HOME/.cylc/REGDB/. Suite names are hierarchical like directory paths, allowing nested tree-like grouping, but use the ‘.’ character as a delimiter. For example:

 
shell$ cylc db print -t nwp 
nwp 
 |-oper 
 | |-region1  Local Model Region1       /oper/nwp/suites/LocalModel/nested/Region1 
 | ‘-region2  Local Model Region2       /oper/nwp/suites/LocalModel/nested/Region2 
 ‘-test 
   ‘-region1  Local Model TEST Region1  /home/hilary/suites/Regional/TESTS/Region1

Suite titles held in the name database are parsed from the suite definition at the time of initial suite registration. If you change the title later use cylc db refresh to update the database.

Name groups are entirely virtual, they do not need to be explicitly created before use, and they automatically disappear if all suites are removed from them. From the listing above, for example, to move the suite nwp.oper.region2 into the nwp.test group:

 
shell$ cylc db rereg nwp.oper.region2 nwp.test.region2 
REREGISTER nwp.oper.region2 to nwp.test.region2 
shell$ cylc db print -tx nwp 
nwp 
 |-oper 
 | ‘-region1  Local Model Region1 
 ‘-test 
   |-region1  Local Model TEST Region1 
   ‘-region2  Local Model Region2

And to move nwp.test.region2 into a new group nwp.para:

 
shell$ cylc db rereg nwp.test.region2 nwp.para.region2 
REREGISTER nwp.test.region2 to nwp.para.region2 
shell$ cylc db print -tx nwp 
nwp 
 |-oper 
 | ‘-region1  Local Model Region1 
 |-test 
 | ‘-region1  Local Model TEST Region1 
 ‘-para 
   ‘-region2  Local Model Region2

Currently you cannot explicitly indicate a group name on the command line by appending a dot character. Rather, in database operations such as copy, reregister, or unregister, the identity of the source item (group or suite) is inferred from the content of the database; and if the source item is a group, so must the target be a group (or it will be, in the case of an item that will be created by the operation). This means that you cannot copy a single suite into a group that does not exist yet unless you specify the entire target suite name.

cylc db register --help shows a number of other examples.

8.1 Database Operations

On the command line, the ‘database’ (or ‘db’) command category contains commands to implement the aforementioned operations.

 
shell$ cylc db help 
CATEGORY: db|database - Suite name registration, copying, deletion, etc. 
 
Suite name registrations are held in a simple database $HOME/.cylc/REGDB 
shell$ cat $HOME/.cylc/REGDB/my.suite 
   title=my suite title 
   path=/path/to/my/suite 
 
HELP: cylc [db|database] COMMAND help,--help 
  You can abbreviate db|database and COMMAND. 
  The category db|database may be omitted. 
 
COMMANDS: 
  copy|cp ............. Copy a suite or a group of suites 
  get-directory ....... Retrieve suite definition directory paths 
  print ............... Print registered suites 
  refresh ............. Report invalid registrations and update suite titles 
  register ............ Register a suite for use 
  reregister|rename ... Change the name of a suite 
  unregister .......... Unregister and optionally delete suites

Individual suites, and groups of suites at any level of the name hierarchy, can be deleted, copied, imported, and exported. To do this, just use suite or group names as source and/or target for the operation, as appropriate. For instance, if a group foo.bar contains the suites foo.bar.baz and foo.bar.qux, you can copy a single suite like this:

 
shell$ cylc copy foo.bar.baz boo $HOME/suites

(resulting in a new suite boo); or the group like this:

 
shell$ cylc copy foo.bar boo $HOME/suites

(resulting in new suites boo.baz and boo.qux); or a higher-level group like this:

 
shell$ cylc copy foo boo $HOME/suites

(resulting in new suites boo.bar.baz and boo.bar.qux). When suites are copied, the suite definition directories are copied into a directory tree, under the target directory, that reflects the suite name hierarchy. cylc copy --help has some explicit examples.

The same functionality is also available by right-clicking on suites or groups in the gcylc “Open Registered Suite” dialog.

8.2 Suite Passphrases

Any client process that connects to a running suite (this includes task messaging and user-invoked interrogation and control commands) must authenticate with a secure passphrase that has been loaded by the suite. A random passphrase is generated automatically in the suite definition directory at registration time if one does not already exist there. For the default Pyro-based connection method the passphrase file must be distributed to other accounts that host running tasks or from which you need monitoring or control access to the running suite.

Alternatively, cylc can be configured to,

  1. use ssh to re-invoke task messaging commands on the suite host; or
  2. use a one-way polling mechanism for tracking task progress.

Neither of these methods requires the suite passphrase to be installed on the task host. For ssh re-invocation, ssh keys must be installed for the task-to-suite direction in addition to the suite-to-task setup already required for job submission. The automatic polling mechanism can be used as a last resort for hosts that do not allow routing back to the suite host for Pyro or ssh. It can also be used as a regular health check on submitted tasks under the other communications methods.

See Section 12 for more detail on cylc client/server communications, and how to use it.

9 Suite Definition

 9.1 Suite Definition Directories
 9.2 Suite.rc File Overview
 9.3 Scheduling - Dependency Graphs
 9.4 Runtime - Task Configuration
 9.5 Visualization
 9.6 Jinja2
 9.7 Special Placeholder Variables
 9.8 Omitting Tasks At Runtime
 9.9 Naked Dummy Tasks And Strict Validation

Cylc suites are defined in structured, validated, suite.rc files that concisely specify the properties of, and the relationships between, the various tasks managed by the suite. This section of the User Guide deals with the format and content of the suite.rc file, including task definition. Task implementation - what’s required of the real commands, scripts, or programs that do the processing that the tasks represent - is covered in Section 10; and task job submission - how tasks are submitted to run - is in Section 11.

9.1 Suite Definition Directories

A cylc suite definition directory contains the suite.rc file itself, plus (optionally) a bin directory of task scripts and executables and any other suite-related files.

A typical example:

 
/path/to/my/suite   # suite definition directory 
    suite.rc           # THE SUITE DEFINITION FILE 
    bin/               # scripts and executables used by tasks 
        foo.sh 
        bar.sh 
        ... 
    # (OPTIONAL) any other suite-related files, for example: 
    inc/               # suite.rc include-files 
        nwp-tasks.rc 
        globals.rc 
        ... 
    doc/               # documentation 
    control/           # control files 
    ancil/             # ancillary files 
    ...

9.2 Suite.rc File Overview

Suite.rc files are written in an extended INI format with section nesting.

Embedded template processor expressions may also be used in the file, to programmatically generate the final suite definition seen by cylc. Currently the Jinja2 template processor is supported (http://jinja.pocoo.org/docs); see Jinja2 (Section 9.6) for examples. In the future cylc may provide a plug-in interface to allow the use of other template engines too.

9.2.1 Syntax

The following defines legal suite.rc syntax:

Suites that embed Jinja2 code (Section 9.6) must process to raw suite.rc syntax.
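For illustration only (the item names here are arbitrary, not real cylc settings), the main syntactic elements are comments, nested sections, single- and multi-line item values, and line continuation:

```ini
# comment lines start with '#'
[section]
    [[subsection]]                 # nesting level = number of square brackets
        item = a single-line value
        text = """a multi-line
                  (triple-quoted) value"""
        long item = a value \
                    continued on the next line
```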

9.2.2 Include-Files

Cylc has native support for suite.rc include-files, which may help to organize large suites. Inclusion boundaries are completely arbitrary - you can think of include-files as chunks of the suite.rc file simply cut-and-pasted into another file. Include-files may be included multiple times in the same file, and even nested. Include-file paths can be specified portably relative to the suite definition directory, e.g.:

 
# include the file $CYLC_SUITE_DEF_PATH/inc/foo.rc: 
%include inc/foo.rc
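Because inclusion is literal cut-and-paste, an include directive can occur at any point in the file, and included files can nest further includes. A hypothetical sketch (file names are illustrative):

```ini
# suite.rc
[scheduling]
    [[dependencies]]
%include inc/graphs.rc

# inc/graphs.rc - pasted in verbatim at that point, and
# may itself include further files:
        [[[0,12]]]
            graph = "foo => bar"
%include inc/more-graphs.rc
```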

Editing Temporarily Inlined Suites Cylc’s native file inclusion mechanism supports optional inlined editing:

 
shell$ cylc edit --inline SUITE

The suite will be split back into its constituent include-files when you exit the edit session. While editing, the inlined file becomes the official suite definition so that changes take effect whenever you save the file. See cylc prep edit --help for more information.

Include-Files via Jinja2 Jinja2 (Section 9.6) also has template inclusion functionality.

9.2.3 Syntax Highlighting For Suite Definitions

Cylc comes with a syntax file to configure suite.rc syntax highlighting and section folding in the vim editor, as shown in Figure 11. We also have an emacs font-lock mode, and syntax files for the gedit and kate editors:

 
$CYLC_DIR/conf/cylc.vim     # vim 
$CYLC_DIR/conf/cylc-mode.el # emacs 
$CYLC_DIR/conf/cylc.lang    # gedit (and other gtksourceview programs) 
$CYLC_DIR/conf/cylc.xml     # kate

Refer to comments at the top of each file to see how to use them.

9.2.4 Gross File Structure

Cylc suite.rc files consist of a suite title and description followed by configuration items grouped under several top-level section headings.

9.2.5 Validation

Cylc suite.rc files are automatically validated against a specification that defines all legal entries, values, options, and defaults. This detects formatting errors, typographic errors, illegal items and illegal values prior to run time. Some values are complex strings that require further parsing by cylc to determine their correctness (this is also done during validation). All legal entries are documented in the Suite.rc Reference (Appendix A).

The validator reports the line numbers of detected errors. Here’s an example showing a section heading with a missing right bracket:

 
shell$ cylc validate my.suite 
    [[special tasks] 
'Section bracket mismatch, line 19'

If the suite.rc file uses include-files, cylc view will show an inlined copy of the suite with correct line numbers (you can also edit suites in a temporarily inlined state with cylc edit --inline).

Validation does not check the validity of chosen job submission methods.

9.3 Scheduling - Dependency Graphs

The [scheduling] section of a suite.rc file defines the relationships between tasks in a suite - the information that allows cylc to determine when tasks are ready to run. The most important component of this is the suite dependency graph. Cylc graph notation makes clear textual graph representations that are very concise because sections of the graph that repeat at different hours of the day, say, only have to be defined once. Here’s an example with dependencies that vary depending on cycle time:

 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] # validity (hours of the day) 
            graph = """ 
A => B & C   # B and C trigger off A 
A[T-6] => A  # Model A restart trigger 
                    """ 
        [[[6,18]]] 
            graph = "C => X"

Figure 20 shows the complete suite.rc listing alongside the suite graph. This is a complete, valid, runnable suite (it will use default task runtime properties such as command scripting).


 
title = "Dependency Graph Example" 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] # validity (hours) 
            graph = """ 
A => B & C   # B and C trigger off A 
A[T-6] => A  # Model A restart trigger 
                    """ 
        [[[6,18]]] # hours 
            graph = "C => X" 
[visualization] 
    [[node attributes]] 
        X = "color=red"

PIC


Figure 20: Example Suite


9.3.1 Graph String Syntax

Multiline graph strings may contain blank lines, arbitrary internal white space, comments (from the # character to the end of a line), and line continuation markers.

9.3.2 Interpreting Graph Strings

Suite dependency graphs can be broken down into pairs in which the left side (which may be a single task or family, or several that are conditionally related) defines a trigger for the task or family on the right. For instance the “word graph” C triggers off B which triggers off A can be deconstructed into pairs C triggers off B and B triggers off A. In this section we use only the default trigger type, which is to trigger off the upstream task succeeding; see Section 9.3.4 for other available triggers.

In the case of cycling tasks, the triggers defined by a graph string are valid for cycle times matching the list of hours specified for the graph section. For example this graph,

 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "A => B"

implies that B triggers off A for cycle times in which the hour matches 0 or 12.

To define intercycle dependencies, attach an offset indicator to the left side of a pair:

 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "A[T-12] => B"

This means B[T] triggers off A[T-12] for cycle times T with hours matching 0 or 12. T must be implicit unless there is a cycle time offset - this keeps graphs clean and concise because the majority of tasks will typically depend only on others with the same cycle time. Cycle time offsets can only appear on the left of a pair, because pairs define triggers for the right-hand task at cycle time T. However, A => B[T-6], which is illegal, can be reformulated as a future trigger A[T+6] => B (see Section 9.3.4.10).
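To make the offset concrete, here is how the pairs expand for some illustrative 10-digit (YYYYMMDDHH) cycle times:

```ini
# Under [[[0,12]]], graph = "A[T-12] => B" generates, for example:
#   B at 2013010112 triggers off A at 2013010100
#   B at 2013010200 triggers off A at 2013010112
```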

Triggers can be chained together. This graph:

 
    graph = """A => B  # B triggers off A 
               B => C  # C triggers off B"""

is equivalent to this:

 
    graph = "A => B => C"

Each trigger in the graph must be unique but the same task can appear in multiple pairs or chains. Separately defined triggers for the same task have an AND relationship. So this:

 
    graph = """A => X  # X triggers off A 
               B => X  # X also triggers off B"""

is equivalent to this:

 
    graph = "A & B => X"  # X triggers off A AND B

In summary, the branching tree structure of a dependency graph can be partitioned into lines (in the suite.rc graph string) of pairs or chains, in any way you like, with liberal use of internal white space and comments to make the graph structure as clear as possible.

 
# B triggers if A succeeds, then C and D trigger if B succeeds: 
    graph = "A => B => C & D" 
# which is equivalent to this: 
    graph = """A => B => C 
               B => D""" 
# and to this: 
    graph = """A => B => D 
               B => C""" 
# and to this: 
    graph = """A => B 
               B => C 
               B => D""" 
# and it can even be written like this: 
    graph = """A => B # blank line follows: 
 
               B => C # comment ... 
               B => D"""

Handling Long Graph Lines Long chains of dependencies can be split into pairs:

 
    graph = "A => B => C" 
# is equivalent to this: 
    graph = """A => B 
               B => C""" 
# BUT THIS IS AN ERROR: 
    graph = """A => B => # WRONG! 
               C"""      # WRONG!

If you have very long task names, or long conditional trigger expressions (below) then you can use the suite.rc line continuation marker:

 
    graph = "A => B \ 
    => C"  # OK

Note that a line continuation marker must be the final character on the line; it cannot be followed by trailing spaces or a comment.

9.3.3 Graph Types (VALIDITY)

A suite definition can contain multiple graph strings that are combined to generate the final graph. There are different graph VALIDITY section headings for cycling, one-off asynchronous, and repeating asynchronous tasks. Additionally, there may be multiple graph strings under different VALIDITY sections for cycling tasks with different dependencies at different cycle times.

One-off Asynchronous Tasks Figure 21 shows a small suite of one-off asynchronous tasks; these have no associated cycle time and don’t spawn successors (once they’re all finished the suite just exits). The integer 1 attached to each graph node is just an arbitrary label, akin to the task cycle time in cycling tasks; it increments when a repeating asynchronous task (below) spawns.


 
title = some one-off asynchronous tasks 
[scheduling] 
    [[dependencies]] 
        graph = "foo => bar & baz => qux"

PIC


Figure 21: One-off Asynchronous Tasks.


Cycling Tasks For cycling tasks the graph VALIDITY section heading defines a sequence of cycles times for which the subsequent graph section is valid. Figure 22 shows a small suite of cycling tasks.


 
title = some cycling tasks 
# (no dependence between cycles here) 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "foo => bar & baz => qux"

PIC


Figure 22: Cycling Tasks.


Stepped Daily, Monthly, And Yearly Cycling In addition to the original hours-of-the-day section headings, cylc now has an extensible cycling mechanism and (so far) stepped daily, monthly, and yearly cycling modules:

 
[scheduling] 
    [[dependencies]] 
        [[[Daily(20100809,2)]]] 
            graph = "foo => bar" 
        [[[Monthly(201008,2)]]] 
            graph = "cat[T-2] => dog" 
        [[[Yearly(2010,2)]]] 
            graph = "apple => orange"

The section heading arguments here are an anchor datetime and an integer step; the cycle sequence always passes through the anchor regardless of the suite's initial cycle time. So, for example, Yearly(2010,3) defines a 3-yearly sequence that always lands on the year 2010 (not 2011 or 2012), whether the initial cycle time is before or after 2010.
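The anchor/step behaviour can usefully be annotated in the suite definition itself; a sketch based on the Yearly(2010,3) example:

```ini
[scheduling]
    [[dependencies]]
        [[[Yearly(2010,3)]]]
            # valid cycles: ..., 2004, 2007, 2010, 2013, 2016, ...
            # (the sequence passes through the 2010 anchor in steps of 3)
            graph = "apple => orange"
```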

Note that hours-of-the-day graph section headings can also be written to explicitly reference the associated cycling module:

 
[scheduling] 
    [[dependencies]] 
        [[[HoursOfTheDay(0,6,12,18)]]] # same as [[[0,6,12,18]]] 
            graph = "red => blue"

How Multiple Graph Strings Combine For a cycling graph with multiple validity sections for different hours of the day, the different sections add to generate the complete graph. Different graph sections can overlap (i.e. the same hours may appear in multiple section headings) and the same tasks may appear in multiple sections, but individual dependencies should be unique across the entire graph. For example, the following graph defines a duplicate prerequisite for task C:

 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "A => B => C" 
        [[[6,18]]] 
            graph = "B => C => X" 
            # duplicate prerequisite: B => C already defined at 6, 18

This does not affect scheduling, but for the sake of clarity and brevity the graph should be written like this:

 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "A => B => C" 
        [[[6,18]]] 
            # X triggers off C only at 6 and 18 hours 
            graph = "C => X"

Combined Asynchronous And Synchronous Graphs Cycling tasks can be made to wait on one-off asynchronous tasks, as shown in Figure 23. Alternatively, they can be made to wait on one-off synchronous start-up tasks, which have an associated cycle time even though they are non-cycling - see Figure 24.

Synchronous Start-up vs One-off Asynchronous Tasks One-off synchronous start-up tasks run only when a cycling suite is cold-started and they are often associated with subsequent one-off cold-start tasks used to bootstrap a cycling suite into existence.

The distinction between cold- and warm-start is only meaningful for cycling tasks, and one-off asynchronous tasks may be best used in constructing entirely non-cycling suites.

However, one-off asynchronous tasks can precede cycling tasks in the same suite, as shown above. It seems likely that, if used in this way, they will be intended as start-up tasks - so currently one-off asynchronous tasks only run in a cold-start.


 
title = one-off async and cycling tasks 
# (with dependence between cycles too) 
[scheduling] 
    [[dependencies]] 
        graph = "prep1 => prep2" 
        [[[0,12]]] 
            graph = """ 
    prep2 => foo => bar & baz => qux 
    foo[T-12] => foo 
                    """

PIC


Figure 23: One-off asynchronous and cycling tasks in the same suite.



 
title = one-off start-up and cycling tasks 
# (with dependence between cycles too) 
[scheduling] 
    [[special tasks]] 
        start-up = prep1, prep2 
    [[dependencies]] 
        [[[0,12]]] 
            graph = """ 
    prep1 => prep2 => foo => bar & baz => qux 
    foo[T-12] => foo 
                    """

PIC


Figure 24: One-off synchronous and cycling tasks in the same suite.


Repeating Asynchronous Tasks Repeating asynchronous tasks can be used, for example, to process satellite data that arrives at irregular time intervals. Each new dataset must have a unique “asynchronous ID”; if a dataset doesn’t naturally have such an ID, a string representation of its arrival time could be used. The graph VALIDITY section heading must contain “ASYNCID:” followed by a regular expression that matches the actual IDs.

Additionally, one task in the suite must be a designated “daemon” that waits indefinitely on incoming data and reports each new dataset (and its ID) back to the suite by means of a special output message. When the daemon task proxy receives a matching message it dynamically registers a new output (containing the ID) that downstream tasks can then trigger off. The downstream tasks likewise have prerequisites containing the ID pattern (because they trigger off the aforementioned outputs), and when these are satisfied during dependency negotiation the actual ID is substituted into their own registered outputs. Finally, each repeating asynchronous task proxy passes the ID to its task execution environment as $ASYNCID, to allow task scripts to identify the correct dataset. In this way a tree of tasks becomes dedicated to processing each new dataset, and multiple datasets can be processed in parallel if they become available in quick succession.

As Figure 25 shows, a repeating asynchronous suite currently plots just like a one-off asynchronous suite. But at run time the daemon task stays put, while the others continually spawn successors to wait for new datasets to come in. The asynchronous.repeating example suite demonstrates how to do this in a real suite. Note that other trigger types (success, failure, start, suicide, and conditional) cannot currently be used in a repeating asynchronous graph section.


 
title = a suite of repeating asynchronous tasks 
# for processing real time satellite datasets 
[scheduling] 
    [[dependencies]] 
        [[[ASYNCID:satX-\d{6}]]] 
            # match datasets satX-1424433 (e.g.) 
            graph = "watcher:a => foo:a & bar:a => baz" 
            daemon = watcher 
[runtime] 
    [[watcher]] 
        [[[outputs]]] 
            a = "New dataset <ASYNCID> ready for processing" 
    [[foo,bar]] 
        [[[outputs]]] 
            a = "Products generated from dataset <ASYNCID>"

PIC


Figure 25: Repeating Asynchronous Tasks.


9.3.4 Trigger Types

Trigger type, indicated by :type after the upstream task (or family) name, determines what kind of event results in the downstream task (or family) triggering.

Success Triggers The default, with no trigger type specified, is to trigger off the upstream task succeeding:

 
# B triggers if A SUCCEEDS: 
    graph = "A => B"

For consistency and completeness, however, the success trigger can be explicit:

 
# B triggers if A SUCCEEDS: 
    graph = "A => B" 
# or: 
    graph = "A:succeed => B"

Failure Triggers To trigger off the upstream task reporting failure:

 
# B triggers if A FAILS: 
    graph = "A:fail => B"

Section 9.3.4.8 (Suicide Triggers) shows one way of handling task B here if A does not fail.

Start Triggers To trigger off the upstream task starting to execute:

 
# B triggers if A STARTS EXECUTING: 
    graph = "A:start => B"

This can be used to trigger tasks that monitor other tasks once they (the target tasks) start executing. Consider a long-running forecast model, for instance, that generates a sequence of output files as it runs. A postprocessing task could be launched with a start trigger on the model (model:start => post) to process the model output as it becomes available. Note, however, that there are several alternative ways of handling this scenario: both tasks could be triggered at the same time (foo => model & post), but depending on external queue delays this could result in the monitoring task starting to execute first; or a different postprocessing task could be triggered off an internal output for each data file (model:out1 => post1 etc.; see Section 9.3.4.5), but this may not be practical if the number of output files is large or if it is difficult to add cylc messaging calls to the model.
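The model/postprocessing scenario above can be written like this (task names are illustrative):

```ini
[scheduling]
    [[dependencies]]
        [[[0,12]]]
            graph = """
    foo => model          # model triggers when foo succeeds
    model:start => post   # post triggers as soon as model starts executing
                """
```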

Finish Triggers To trigger off the upstream task succeeding or failing, i.e. finishing one way or the other:

 
# B triggers if A either SUCCEEDS or FAILS: 
    graph = "A | A:fail => B" 
# or 
    graph = "A:finish => B"

Internal (Message) Triggers These allow triggering off events that occur while a task runs. A special event message must be registered in the suite definition, and deliberately sent by the task at the appropriate time.

 
[scheduling] 
    [[dependencies]] 
        [[[6,18]]] 
            # B triggers off internal output "upload1" of task A: 
            graph = "A:upload1 => B" 
[runtime] 
    [[A]] 
        [[[outputs]]] 
            upload1 = "NWP products uploaded for [T]"

Task A must emit this message when the actual output has been completed - see Reporting Internal Outputs Completed (Section 10.3).
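On the task side, the registered output is reported back to the suite when the upload has actually completed. A sketch of the relevant line in task A's script, assuming the standard cylc task messaging command and the $CYLC_TASK_CYCLE_TIME variable supplied to task environments:

```shell
# in task A's job script, after the upload has completed:
cylc task message "NWP products uploaded for $CYLC_TASK_CYCLE_TIME"
```

(The [T] in the registered output string stands for the task's cycle time, so the reported message must contain the actual cycle time.)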

Job Submission Triggers It is also possible to trigger off a task submitting, or failing to submit:

 
# B triggers if A submits successfully: 
    graph = "A:submit => B" 
# D triggers if C fails to submit successfully: 
    graph = "C:submit-fail => D"

A possible use case for submit-fail triggers: if a task goes into the submit-failed state, possibly after several job submission retries, another task that inherits the same runtime but sets a different job submission method and/or host could be triggered to, in effect, run the same job on a different platform.
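A sketch of this recovery pattern, combining a submit-fail trigger with a suicide trigger (task names, the host, and the job submission settings are illustrative assumptions, not prescriptive):

```ini
[scheduling]
    [[dependencies]]
        graph = """
    model:submit-fail => model_b   # backup task on another platform
    model | model_b => post
    model => !model_b              # remove the backup if model succeeds
                """
[runtime]
    [[model]]
        [[[job submission]]]
            method = loadleveler
    [[model_b]]
        inherit = model            # same job, different platform
        [[[job submission]]]
            method = background
        [[[remote]]]
            host = backup.host.example
```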

Conditional Triggers AND operators (&) can appear on both sides of an arrow. They provide a concise alternative to defining multiple triggers separately:

 
# 1/ this: 
    graph = "A & B => C" 
# is equivalent to: 
    graph = """A => C 
               B => C""" 
# 2/ this: 
    graph = "A => B & C" 
# is equivalent to: 
    graph = """A => B 
               A => C""" 
# 3/ and this: 
    graph = "A & B => C & D" 
# is equivalent to this: 
    graph = """A => C 
               B => C 
               A => D 
               B => D"""

OR operators (|), which result in true conditional triggers, can only appear on the left:2

 
# C triggers when either A or B finishes: 
    graph = "A | B => C"

Forecasting suites typically have simple conditional triggering requirements, but any valid conditional expression can be used, as shown in Figure 26 (conditional triggers are plotted with open arrow heads).


 
        graph = """ 
# D triggers if A or (B and C) succeed 
A | B & C => D 
# just to align the two graph sections 
D => W 
# Z triggers if (W or X) and Y succeed 
(W|X) & Y => Z 
                """

PIC


Figure 26: Conditional triggers are plotted with open arrow heads.


Suicide Triggers Suicide triggers take tasks out of the suite. This can be used for automated failure recovery. The suite.rc listing and accompanying graph in Figure 27 show how to define a chain of failure recovery tasks that trigger if they’re needed but otherwise remove themselves from the suite (you can run the AutoRecover.async example suite to see how this works). The dashed graph edges ending in solid dots indicate suicide triggers, and the open arrowheads indicate conditional triggers as usual.


 
title = asynchronous automated recovery 
description = """ 
Model task failure triggers diagnosis 
and recovery tasks, which take themselves 
out of the suite if model succeeds. Model 
post processing triggers off model OR 
recovery tasks. 
              """ 
[scheduling] 
    [[dependencies]] 
        graph = """ 
pre => model 
model:fail => diagnose => recover 
model => !diagnose & !recover 
model | recover => post 
                """ 
[runtime] 
    [[model]] 
        # UNCOMMENT TO TEST FAILURE: 
        # command scripting = /bin/false

PIC


Figure 27: Automated failure recovery via suicide triggers.


Note that multiple suicide triggers combine in the same way as other triggers, so this:

 
foo => !baz 
bar => !baz

is equivalent to this:

 
foo & bar => !baz

i.e. both foo and bar must succeed for baz to be taken out of the suite. If you really want a task to be taken out if any one of several events occurs then be careful to write it that way:

 
foo | bar => !baz

Family Triggers Families defined by the namespace inheritance hierarchy (Section 9.4) can be used in the graph to trigger whole groups of tasks at the same time (e.g. forecast model ensembles, or groups of tasks for processing different observation types), and for triggering downstream tasks off families as a whole. Higher level families, i.e. families of families, can also be used, and are reduced to the lowest level member tasks. Note that tasks can also trigger off individual family members if necessary.

To trigger an entire task family at once:

 
[scheduling] 
    [[dependencies]] 
        graph = "foo => fam" 
[runtime] 
    [[fam]]    # a family (because others inherit from it) 
    [[m1,m2]]  # family members (inherit from namespace fam) 
        inherit = fam

This is equivalent to:

 
[scheduling] 
    [[dependencies]] 
        graph = "foo => m1 & m2" 
[runtime] 
    [[fam]] 
    [[m1,m2]] 
        inherit = fam

To trigger other tasks off families we have to specify whether to trigger off all members starting, succeeding, failing, or finishing, or off any member doing the same. Legal family triggers are thus:

 
[scheduling] 
    [[dependencies]] 
        graph = """ 
      # all-member triggers: 
    fam:start-all => one 
    fam:succeed-all => one 
    fam:fail-all => one 
    fam:finish-all => one 
      # any-member triggers: 
    fam:start-any => one 
    fam:succeed-any => one 
    fam:fail-any => one 
    fam:finish-any => one 
                """

Here’s how to trigger downstream processing if one or more family members succeed, but only after all members have finished (succeeded or failed):

 
[scheduling] 
    [[dependencies]] 
        graph = """ 
    fam:finish-all & fam:succeed-any => foo 
                """

Intercycle Triggers Most tasks in a suite will typically trigger off others with the same cycle time, but some may depend on tasks with different cycle times. This notably applies to warm-cycled forecast models, which depend on their own previous instances (see below); but other kinds of intercycle dependence are possible too.3 Here’s how to express this kind of relationship in cylc:

 
[dependencies] 
    [[0,6,12,18]] 
        # B triggers off A in the previous cycle 
        graph = "A[T-6] => B"

Intercycle and trigger type (and internal output) notation can be combined:

 
    # B triggers if A in the previous cycle fails: 
    graph = "A[T-6]:fail => B"

At suite start-up inter-cycle triggers refer to a previous cycle that does not exist. This does not cause the dependent task to wait indefinitely, however, because cylc ignores triggers that reach back beyond the initial cycle time. That said, the presence of an inter-cycle trigger does normally imply that something special has to happen at start-up. If a model depends on its own previous instance for restart files, for instance, then an initial set of restart files has to be generated somehow or the first model task will presumably fail with missing input files. There are several ways to handle this in cylc using different kinds of one-off (non-cycling) tasks that run at suite start-up. They are illustrated in Tutorial Section 7.23.1; briefly, the initial tasks may be one-off asynchronous tasks, one-off synchronous start-up tasks, or cold-start tasks.

The first two cases are the same, except that start-up tasks are assigned a cycle time (even though they don’t cycle) whereas asynchronous tasks are not. In the first cycle the previous-cycle trigger is ignored and the first cycling tasks trigger off the initial tasks; subsequently dependence on the initial tasks is ignored and the inter-cycle trigger takes effect.

Cold-start tasks, on the other hand, can be used for real model cold-start processes, whereby a warm-cycled model at any given cycle time can in principle have its inputs satisfied by a previous instance of itself, or by a cold-start task with (nominally) the same cycle time. In effect, the cold-start task masquerades as the previous-cycle trigger of its associated cycling task. At suite start-up cold-start tasks will trigger the first cycling tasks, and thereafter the inter-cycle trigger will take effect. Unlike for asynchronous and start-up initial tasks, however, the cold-start “OR” construct means that cold-start triggers don’t have to be ignored by cylc after the first cycle, so it is possible to insert cold-start tasks into a suite mid-run to do mid-stream cold-starts after problems that preclude continued normal warm cycling.

One-off initial tasks can invoke real processing to generate the files that would otherwise be produced by tasks in the previous cycle; or they can be dummy tasks that represent some external process that does the same before the suite is started - in which case the initial task can just report itself successfully completed after checking that the required files are present.

Warm-Starting Suites For suites with inter-cycle dependence a warm-start is essentially an implicit restart. Rather than loading tasks from a previously recorded suite state, it loads all cycling tasks at a given cycle time, assuming that the previous cycle was completed in an earlier suite run. Any initial tasks - asynchronous, start-up, or cold-start - therefore do not need to run again. Dependence on tasks from before the start cycle is still ignored, but cold-start tasks have to be loaded in the succeeded state because dependence on them (in the cold-start OR construct) is retained throughout the suite run, as explained above in Section 9.3.4.10.

Future Triggers Cylc also supports inter-cycle triggering off tasks in the future (with respect to cycle time!):

 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            # B triggers off A in the next cycle 
            graph = "A[T+6] => B"

In contrast to normal inter-cycle triggers, future triggers present a problem at the suite stop time rather than at start-up - in the final cycle B wants to trigger off A at a future cycle time that does not exist. To avoid this problem cylc prevents tasks from spawning successors that depend on tasks in a non-existent future cycle.

9.3.5 Model Restart Dependencies

Warm cycled forecast models generate restart files, e.g. model background fields, that are required to initialize the next forecast (this is essentially the definition of “warm cycling”). In fact restart files will often be written for a whole series of subsequent cycles in case the next cycle (or the one after that, and so on) has to be omitted:

 
[scheduling] 
    [[special tasks]] 
        sequential = A 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            # Model A cold-start and restart dependencies: 
            graph = "ColdA | A[T-6] | A[T-12] | A[T-18] | A[T-24] => A"

In other words, task A can trigger off a cotemporal cold-start task, or off its own previous instance, or off the instance before that, and so on. Restart dependencies are unusual because although A could trigger off A[T-12] we don’t actually want it to do so unless A[T-6] fails and can’t be fixed. This is why task A, above, is declared to be ‘sequential’. Sequential tasks do not spawn a successor until they have succeeded (by default, tasks spawn as soon as they start running in order to get maximum functional parallelism in a suite), which means that A[T+6] will not be waiting around to trigger off an older predecessor while A[T] is still running. If A[T] fails, though, the operator can force it, on removal, to spawn A[T+6], whose restart dependencies will then automatically be satisfied by the older instance A[T-6].

Forcing a model to run sequentially means, of course, that its restart dependencies cannot be violated anyway, so we might just ignore them. This is certainly an option, but it should be noted that there are some benefits to having your suite reflect all of the real dependencies between the tasks that it is managing, particularly for complex multi-model operational suites in which the suite operator might not be an expert on the models. Consider such a suite in which a failure in a driving model (e.g. weather) precludes running one or more cycles of the downstream models (sea state, storm surge, river flow, …). If the real restart dependencies of each model are known to the suite, the operator can just do a recursive purge to remove the subtree of all tasks that can never run due to the failure, and then cold-start the failed driving model after a gap (skipping as few cycles as possible until the new cold-start input data are available). After that the downstream models will kick off automatically so long as the gap is spanned by their respective restart files, because their restart dependencies will automatically be satisfied by the older pre-gap instances in the suite. Managing this kind of scenario manually in a complex suite can be quite difficult.

Finally, if a warm cycled model is declared to have explicit restart outputs, and is not declared to be sequential, and you define appropriately labeled restart outputs (whose messages must contain the word ‘restart’), then the task will spawn as soon as its last restart output is completed, so that successive instances of the task can overlap (i.e. run in parallel) if the opportunity arises. Whether or not this is worth the effort depends on your needs.

 
[scheduling] 
    [[special tasks]] 
        explicit restart outputs = A 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "ColdA | A[T-18]:res18 | A[T-12]:res12 | A[T-6]:res6 => A" 
[runtime] 
    [[A]] 
        [[[outputs]]] 
            res6  = restart files completed for [T+6] 
            res12 = restart files completed for [T+12] 
            res18 = restart files completed for [T+18]

9.4 Runtime - Task Configuration

The [runtime] section of a suite definition configures what to execute (and where and how to execute it) when each task is ready to run, in a multiple inheritance hierarchy of namespaces culminating in individual tasks. This allows all common configuration detail to be factored out and defined in one place.

Any namespace can configure any or all of the items defined in the Suite.rc Reference, Appendix A.

Namespaces that do not explicitly inherit from others automatically inherit from the root namespace (below).

Nested namespaces define task families that can be used in the graph as convenient shorthand for triggering all member tasks at once, or for triggering other tasks off all members at once - see Family Triggers, Section 9.3.4.9. Nested namespaces can be progressively expanded and collapsed in the dependency graph viewer, and in the gcylc graph and tree views. Only the first parent of each namespace (as for single-inheritance) is used for suite visualization purposes.
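As a minimal sketch (family and task names are illustrative; the trigger syntax is covered in Section 9.3.4.9), a nested family and a family trigger look like this:

```
[scheduling]
    [[dependencies]]
        graph = "FAM:succeed-all => postproc"
[runtime]
    [[FAM]]         # family namespace
    [[m1, m2]]      # member tasks
        inherit = FAM
```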

9.4.1 Namespace Names

Namespace names may contain letters, digits, underscores, and hyphens.

Note that task names need not be hardwired into task implementations because task and suite identity can be extracted portably from the task execution environment supplied by cylc (Section 9.4.7) - then to rename a task you can just change its name in the suite definition.

9.4.2 Root - Runtime Defaults

The root namespace, at the base of the inheritance hierarchy, provides default configuration for all tasks in the suite. Most root items are unset by default, but some have default values sufficient to allow test suites to be defined by dependency graph alone. The command scripting item, for example, defaults to code that prints a message then sleeps for between 1 and 15 seconds and exits. Default values are documented with each item in Appendix A. You can override the defaults or provide your own defaults by explicitly configuring the root namespace.
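For example, to replace the default scripting for all tasks at once (the command value shown is purely illustrative):

```
[runtime]
    [[root]]
        command scripting = "echo Hello from $CYLC_TASK_ID; sleep 10"
```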

9.4.3 Defining Multiple Namespaces At Once

If a namespace section heading is a comma-separated list of names then the subsequent configuration applies to each list member. Particular tasks can be singled out at run time using the $CYLC_TASK_NAME variable.

As an example, consider a suite containing an ensemble of closely related tasks that each invokes the same script but with a unique argument that identifies the calling task name:

 
[runtime] 
    [[ensemble]] 
        command scripting = "run-model.sh $CYLC_TASK_NAME" 
    [[m1, m2, m3]] 
        inherit = ensemble

For large ensembles Jinja2 template processing can be used to automatically generate the member names and associated dependencies (see Section 9.6).

9.4.4 Runtime Inheritance - Single

The following listing of the inherit.single.one example suite illustrates basic runtime inheritance with single parents.

 
# SUITE.RC 
title = "User Guide [runtime] example." 
[cylc] 
    required run mode = simulation # (no task implementations) 
[scheduling] 
    initial cycle time = 2011010106 
    final cycle time = 2011010200 
    [[dependencies]] 
        graph = """foo => OBS 
             OBS:succeed-all => bar""" 
[runtime] 
    [[root]] # base namespace for all tasks (defines suite-wide defaults) 
        [[[job submission]]] 
            method = at_now 
        [[[environment]]] 
            COLOR = red 
    [[OBS]]  # family (inherited by land, ship); implicitly inherits root 
        command scripting = run-${CYLC_TASK_NAME}.sh 
        [[[environment]]] 
            RUNNING_DIR = $HOME/running/$CYLC_TASK_NAME 
    [[land]] # a task (a leaf on the inheritance tree) in the OBS family 
        inherit = OBS 
        description = land obs processing 
    [[ship]] # a task (a leaf on the inheritance tree) in the OBS family 
        inherit = OBS 
        description = ship obs processing 
        [[[job submission]]] 
            method = loadleveler 
        [[[environment]]] 
            RUNNING_DIR = $HOME/running/ship  # override OBS environment 
            OUTPUT_DIR = $HOME/output/ship    # add to OBS environment 
    [[foo]] 
        # (just inherits from root) 
 
    # The task [[bar]] is implicitly defined by its presence in the 
    # graph; it is also a dummy task that just inherits from root.

9.4.5 Runtime Inheritance - Multiple

If a namespace inherits from multiple parents the linear order of precedence (which namespace overrides which) is determined by the so-called C3 algorithm used to find the linear method resolution order for class hierarchies in Python and several other object oriented programming languages. The result of this should be fairly obvious for typical use of multiple inheritance in cylc suites, but for detailed documentation of how the algorithm works refer to the official Python documentation here: http://www.python.org/download/releases/2.3/mro/.
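Because the algorithm is the same as Python’s, the precedence order can be illustrated with ordinary Python classes. This sketch (class names mirror the inherit.multi.one namespaces; it is an analogy, not cylc code) shows the linearization for a namespace that inherits from OPS and SERIAL:

```python
# C3 linearization demonstrated via Python's method resolution order.
# Class names mirror the inherit.multi.one namespaces (an analogy only).
class root: pass
class OPS(root): pass
class SERIAL(root): pass
class ops_s1(OPS, SERIAL): pass   # inherit = OPS, SERIAL

# Precedence: the namespace itself, then OPS, then SERIAL, then root.
print([c.__name__ for c in ops_s1.__mro__])
# ['ops_s1', 'OPS', 'SERIAL', 'root', 'object']
```

So where OPS and SERIAL both define an item, the OPS value wins, because listed parents take precedence left to right.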

The inherit.multi.one example suite, listed here, makes use of multiple inheritance:

 
 
title = "multiple inheritance example" 
 
description = """To see how multiple inheritance works: 
 
 % cylc list -tb[m] SUITE # list namespaces 
 % cylc graph -n SUITE # graph namespaces 
 % cylc graph SUITE # dependencies, collapse on first-parent namespaces 
 
  % cylc get-config --sparse --item [runtime]ops_s1 SUITE 
  % cylc get-config --sparse --item [runtime]var_p2 foo""" 
 
[scheduling] 
    [[dependencies]] 
        graph = "OPS:finish-all => VAR" 
 
[runtime] 
    [[root]] 
    [[OPS]] 
        command scripting = echo "RUN: run-ops.sh" 
    [[VAR]] 
        command scripting = echo "RUN: run-var.sh" 
    [[SERIAL]] 
        [[[directives]]] 
            job_type = serial 
    [[PARALLEL]] 
        [[[directives]]] 
            job_type = parallel 
    [[ops_s1, ops_s2]] 
        inherit = OPS, SERIAL 
 
    [[ops_p1, ops_p2]] 
        inherit = OPS, PARALLEL 
 
    [[var_s1, var_s2]] 
        inherit = VAR, SERIAL 
 
    [[var_p1, var_p2]] 
        inherit = VAR, PARALLEL 
 
[visualization] 
    # NOTE ON VISUALIZATION AND MULTIPLE INHERITANCE: overlapping 
    # family groups can have overlapping attributes, so long as 
    # non-conflicting attributes are used to style each group. Below, 
    # for example, OPS tasks are filled green and SERIAL tasks are 
    # outlined blue, so that ops_s1 and ops_s2 are green with a blue 
    # outline. But if the SERIAL tasks are explicitly styled as "not 
    # filled" (by setting "style=") this will override the fill setting 
    # in the (previously defined and therefore lower precedence) OPS 
    # group, making ops_s1 and ops_s2 unfilled with a blue outline. 
    # Alternatively you can just create a manual node group for ops_s1 
    # and ops_s2 and style them separately. 
    [[node groups]] 
        #(see comment above:) 
        #serial_ops = ops_s1, ops_s2 
    [[node attributes]] 
        OPS = "style=filled", "fillcolor=green" 
        SERIAL = "color=blue" #(see comment above:), "style=" 
        #(see comment above:) 
        #serial_ops = "color=blue", "style=filled", "fillcolor=green"

cylc get-config provides an easy way to check the result of inheritance in a suite. You can extract specific items, e.g.:
 
shell$ cylc get-config --item '[runtime][var_p2]command scripting' inherit.multi.one 
echo "RUN: run-var.sh"

or use the --sparse option to print entire namespaces without obscuring the result with the dense runtime structure obtained from the root namespace:

 
shell$ cylc get-config --sparse --item '[runtime]ops_s1' inherit.multi.one 
command scripting = echo "RUN: run-ops.sh" 
inherit = ['OPS', 'SERIAL'] 
[directives] 
   job_type = serial

Suite Visualization And Multiple Inheritance The first parent inherited by a namespace is also used as the collapsible family group when visualizing the suite. If this is not what you want, you can demote the first parent for visualization purposes, without affecting the order of inheritance of runtime properties:

 
[runtime] 
    [[bar]] 
        # ... 
    [[foo]] 
        # inherit properties from bar, but stay under root for visualization: 
        inherit = None, bar

9.4.6 How Runtime Inheritance Works

The linear precedence order of ancestors is computed for each namespace using the C3 algorithm. Then any runtime items that are explicitly configured in the suite definition are “inherited” up the linearized hierarchy for each task, starting at the root namespace: if a particular item is defined at multiple levels in the hierarchy, the level nearest the final task namespace takes precedence. Finally, root namespace defaults are applied for every item that has not been configured in the inheritance process (this is more efficient than carrying the full dense namespace structure through from root from the beginning).
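Schematically, the process amounts to the following (a sketch of the idea only, not cylc’s actual implementation):

```python
def flatten(linearized_ancestry, config):
    """Inherit items along the linearized hierarchy.

    linearized_ancestry: namespace names ordered root first, task last,
    so items from namespaces nearer the task override those nearer root.
    """
    result = {}
    for namespace in linearized_ancestry:
        result.update(config.get(namespace, {}))
    return result

# Illustrative environment items, as in the example of Section 9.4.7:
config = {
    'root': {'COLOR': 'red', 'SHAPE': 'circle'},
    'foo':  {'COLOR': 'blue', 'TEXTURE': 'rough'},
}
print(flatten(['root', 'foo'], config))
# {'COLOR': 'blue', 'SHAPE': 'circle', 'TEXTURE': 'rough'}
```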

9.4.7 Task Execution Environment

The task execution environment contains suite and task identity variables provided by cylc, and user-defined environment variables. The environment is explicitly exported (by the task job script) prior to executing task command scripting (see Task Job Submission, Section 11).

Suite and task identity are exported first, so that user-defined variables can refer to them. Order of definition is preserved throughout so that variable assignment expressions can safely refer to previously defined variables.

Additionally, access to cylc itself is configured prior to the user-defined environment, so that variable assignment expressions can make use of cylc utility commands:

 
[runtime] 
    [[foo]] 
        [[[environment]]] 
            REFERENCE_TIME = $( cylc util cycletime --offset-hours=6 )

User Environment Variables A task’s user-defined environment results from its inherited [[[environment]]] sections:

 
[runtime] 
    [[root]] 
        [[[environment]]] 
            COLOR = red 
            SHAPE = circle 
    [[foo]] 
        [[[environment]]] 
            COLOR = blue  # root override 
            TEXTURE = rough # new variable

This results in a task foo with SHAPE=circle, COLOR=blue, and TEXTURE=rough in its environment.

Overriding Environment Variables When you override inherited namespace items the original parent item definition is replaced by the new definition. This applies to all items including those in the environment sub-sections which, strictly speaking, are not “environment variables” until they are written, post inheritance processing, to the task job script that executes the associated task. Consequently, if you override an environment variable you cannot also access the original parent value:

 
[runtime] 
    [[foo]] 
        [[[environment]]] 
            COLOR = red 
    [[bar]] 
        inherit = foo 
        [[[environment]]] 
            tmp = $COLOR        # !! ERROR: $COLOR is undefined here 
            COLOR = dark-$tmp   # !! as this overrides COLOR in foo.

The compressed variant of this, COLOR = dark-$COLOR, is also in error for the same reason. To achieve the desired result you must use a different name for the parent variable:

 
[runtime] 
    [[foo]] 
        [[[environment]]] 
            FOO_COLOR = red 
    [[bar]] 
        inherit = foo 
        [[[environment]]] 
            COLOR = dark-$FOO_COLOR  # OK

Suite And Task Identity Variables The task identity variables provided to tasks by cylc are:

 
$CYLC_TASK_ID                    # X.2011051118 (e.g.) 
$CYLC_TASK_NAME                  # X 
$CYLC_TASK_CYCLE_TIME            # 2011051118 
$CYLC_TASK_LOG_ROOT              # ~/cylc-run/foo.bar.baz/log/job/X.2011051118.1 
$CYLC_TASK_NAMESPACE_HIERARCHY   # "root postproc X" (e.g.) 
$CYLC_TASK_TRY_NUMBER            # increments with automatic retry-on-fail 
$CYLC_TASK_WORK_DIR              # task work directory (see below) 
$CYLC_SUITE_SHARE_DIR            # suite (or task!) shared directory (see below) 
$CYLC_TASK_IS_COLDSTART          # 'True' for cold-start tasks, else 'False'

And the suite identity variables are:

 
$CYLC_SUITE_DEF_PATH   # $HOME/mysuites/baz (e.g.) 
$CYLC_SUITE_NAME       # foo.bar.baz (e.g.) 
$CYLC_SUITE_REG_PATH   # name translated to path: foo/bar/baz 
$CYLC_SUITE_HOST       # orca.niwa.co.nz (e.g.) 
$CYLC_SUITE_PORT       # 7766 (e.g.) 
$CYLC_SUITE_OWNER      # hilary (e.g.)

Some of these variables are also used by cylc task messaging commands in order to target the right task proxy object in the right suite.

Suite Share And Task Work Directories A suite share directory is created automatically for use as a file exchange area for tasks on the same task host. It can be accessed via $CYLC_SUITE_SHARE_DIR and its location can be set in the cylc site and user config files.

A task work directory is also created automatically for each task, and can be accessed via the $CYLC_TASK_WORK_DIR variable. Task command scripting is executed from within the work directory (i.e. it is the task’s current working directory). For non-detaching tasks the work directory is automatically removed again if it is empty when the task finishes. The main work directory location is set in the cylc site and user config files, but the lowest-level sub-directory, whose name defaults to the task ID to give each task a unique workspace, can be overridden under [runtime] in suite definitions. This enables groups of tasks that read and write files from their current working directories to be given a common work directory as a file share space.

Other Cylc-Defined Environment Variables Initial and final cycle times, if supplied via the suite.rc file or the command line, are passed to task execution environments as:

 
$CYLC_SUITE_INITIAL_CYCLE_TIME 
$CYLC_SUITE_FINAL_CYCLE_TIME

Tasks can use these to determine whether or not they are running in the first or final cycles.
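For example, a task implemented in Python might check for the first or final cycle like this (a sketch; the values are set manually here for illustration, whereas in a real task cylc exports them):

```python
import os

# In a real task these variables are exported by cylc; set them here
# with illustrative values so the sketch is self-contained.
os.environ['CYLC_TASK_CYCLE_TIME'] = '2010080800'
os.environ['CYLC_SUITE_INITIAL_CYCLE_TIME'] = '2010080800'
os.environ['CYLC_SUITE_FINAL_CYCLE_TIME'] = '2010081600'

is_first = (os.environ['CYLC_TASK_CYCLE_TIME'] ==
            os.environ['CYLC_SUITE_INITIAL_CYCLE_TIME'])
is_final = (os.environ['CYLC_TASK_CYCLE_TIME'] ==
            os.environ['CYLC_SUITE_FINAL_CYCLE_TIME'])
print(is_first, is_final)  # True False
```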

Environment Variable Evaluation Variables in the task execution environment are not evaluated in the shell in which the suite is running prior to submitting the task. They are written in unevaluated form to the job script that is submitted by cylc to run the task (Section 11.2) and are therefore evaluated when the task begins executing under the task owner account on the task host. Thus $HOME, for instance, evaluates at run time to the home directory of the task owner on the task host.

9.4.8 How Tasks Get Access To The Suite Directory

Tasks can use $CYLC_SUITE_DEF_PATH to access suite files on the task host, and the suite bin directory is automatically added to $PATH. If a remote suite definition directory is not specified, the local (suite host) path will be assumed, with the local home directory, if present, swapped for the literal string $HOME for evaluation on the task host.

9.4.9 Remote Task Hosting

If a task declares an owner other than the suite owner and/or a host other than the suite host, cylc will use passwordless ssh to execute the task on the owner@host account, by the configured job submission method:

 
[runtime] 
    [[foo]] 
        [[[remote]]] 
            host = orca.niwa.co.nz 
            owner = bob 
        [[[job submission]]] 
            method = pbs

For this to work,

To learn how to give remote tasks access to cylc, see Section 12.6.

Tasks running on the suite host under another user account are treated as remote tasks.

Remote hosting, like all namespace settings, can be declared globally in the root namespace, or per family, or for individual tasks.

Dynamic Host Selection Instead of hardwiring host names into the suite definition you can specify a shell command that prints a hostname, or an environment variable that holds a hostname, as the value of the host config item. See Section A.4.1.19.1.

Remote Task Log Directories Task stdout and stderr streams are written to log files in a suite-specific sub-directory of the suite run directory, as explained in Section 11.4. For remote tasks the same directory is used, but on the task host. Remote task log directories, like local ones, are created on the fly, if necessary, during job submission.

9.5 Visualization

The visualization section of a suite definition is used to configure suite graphing, principally graph node (task) and edge (dependency arrow) style attributes. Tasks can be grouped for the purpose of applying common style attributes. See the suite.rc reference (Appendix A) for details.

9.5.1 Collapsible Families In Suite Graphs
 
[visualization] 
    collapsed families = family1, family2

Nested families from the runtime inheritance hierarchy can be expanded and collapsed in suite graphs and the gcylc graph view. All families are displayed in the collapsed state at first, unless [visualization]collapsed families is used to single out specific families for initial collapsing.

In the gcylc graph view, nodes outside of the main graph (such as the members of collapsed families) are plotted as rectangular nodes to the right if they are doing anything interesting (submitted, running, failed).

Figure 28 illustrates successive expansion of nested task families in the namespaces example suite.


PIC

PIC

PIC

PIC

PIC

PIC


Figure 28: Graphs of the namespaces example suite showing various states of expansion of the nested namespace family hierarchy, from all families collapsed (top left) through to all expanded (bottom right). This can also be done by right-clicking on tasks in the gcylc graph view.


9.6 Jinja2

Cylc has built in support for the Jinja2 template processor in suite definitions. Jinja2 variables, mathematical expressions, loop control structures, conditional logic, etc., are automatically processed to generate the final suite definition seen by cylc.

The need for Jinja2 processing must be declared with a hash-bang comment as the first line of the suite.rc file:

 
#!jinja2 
# ...

Potential uses for this include automatic generation of repeated groups of similar tasks and dependencies, and inclusion or exclusion of entire suite sections according to the value of a single flag. Consider a large complicated operational suite and several related parallel test suites with slightly different task content and structure (the parallel suites, for instance, might take certain large input files from the operational suite or the archive rather than downloading them again) - these can now be maintained as a single master suite definition that reconfigures itself according to the value of a flag variable indicating the intended use.

Template processing is the first thing done on parsing a suite definition so Jinja2 expressions can appear anywhere in the file (inside strings and namespace headings, for example).

Jinja2 is well documented at http://jinja.pocoo.org/docs, so here we just provide an example suite that uses it. The meaning of the embedded Jinja2 code should be reasonably self-evident to anyone familiar with standard programming techniques.


PIC


Figure 29: The Jinja2 ensemble example suite graph.


The jinja2.ensemble example, graphed in Figure 29, shows an ensemble of similar tasks generated using Jinja2:

 
#!jinja2 
{% set N_MEMBERS = 5 %} 
[scheduling] 
    [[dependencies]] 
        graph = """{# generate ensemble dependencies #} 
            {% for I in range( 0, N_MEMBERS ) %} 
               foo => mem_{{ I }} => post_{{ I }} => bar 
            {% endfor %}"""

Here is the generated suite definition, after Jinja2 processing:

 
#!jinja2 
[scheduling] 
    [[dependencies]] 
        graph = """ 
          foo => mem_0 => post_0 => bar 
          foo => mem_1 => post_1 => bar 
          foo => mem_2 => post_2 => bar 
          foo => mem_3 => post_3 => bar 
          foo => mem_4 => post_4 => bar 
                """

And finally, the jinja2.cities example uses variables, includes or excludes special cleanup tasks according to the value of a logical flag, and automatically generates all dependencies and family relationships for a group of tasks that is repeated for each city in the suite. To add a new city and its associated tasks and dependencies, simply add the city name to the list at the top of the file. The suite is graphed, with the New York City task family expanded, in Figure 30.

 
#!Jinja2 
 
title = "Jinja2 city suite example." 
description = """ 
Illustrates use of variables and math expressions, and programmatic 
generation of groups of related dependencies and runtime properties.""" 
 
{% set HOST = "SuperComputer" %} 
{% set CITIES = 'NewYork', 'Philadelphia', 'Newark', 'Houston', 'SantaFe', 'Chicago' %} 
{% set CITYJOBS = 'one', 'two', 'three', 'four' %} 
{% set LIMIT_MINS = 20 %} 
 
{% set CLEANUP = True %} 
 
[scheduling] 
    [[ dependencies ]] 
{% if CLEANUP %} 
        [[[23]]] 
            graph = "clean" 
{% endif %} 
        [[[0,12]]] 
            graph = """ 
                    setup => get_lbc & get_ic # foo 
{% for CITY in CITIES %} {# comment #} 
                    get_lbc => {{ CITY }}_one 
                    get_ic => {{ CITY }}_two 
                    {{ CITY }}_one & {{ CITY }}_two => {{ CITY }}_three & {{ CITY }}_four 
{% if CLEANUP %} 
                    {{ CITY }}_three & {{ CITY }}_four => cleanup 
{% endif %} 
{% endfor %} 
                    """ 
[runtime] 
    [[on_{{ HOST }} ]] 
        [[[remote]]] 
            host = {{ HOST }} 
            # (remote cylc directory is set in site/user config for this host) 
        [[[directives]]] 
            wall_clock_limit = "00:{{ LIMIT_MINS|int() + 2 }}:00,00:{{ LIMIT_MINS }}:00" 
 
{% for CITY in CITIES %} 
    [[ {{ CITY }} ]] 
        inherit = on_{{ HOST }} 
{% for JOB in CITYJOBS %} 
    [[ {{ CITY }}_{{ JOB }} ]] 
        inherit = {{ CITY }} 
{% endfor %} 
{% endfor %} 
 
[visualization] 
    initial cycle time = 2011080812 
    final cycle time = 2011080823 
    [[node groups]] 
        cleaning = clean, cleanup 
    [[node attributes]] 
        cleaning = 'style=filled', 'fillcolor=yellow' 
        NewYork = 'style=filled', 'fillcolor=lightblue'

PIC


Figure 30: The Jinja2 cities example suite graph, with the New York City task family expanded.

9.6.1 Accessing Environment Variables With Jinja2

This functionality is not provided by Jinja2 by default, but cylc automatically imports the user environment to the template in a dictionary structure called environ. A usage example:

 
#!Jinja2 
#... 
[runtime] 
    [[root]] 
        [[[environment]]] 
            SUITE_OWNER_HOME_DIR_ON_SUITE_HOST = {{environ['HOME']}}

This example emphasizes that the environment is read on the suite host at the time the suite definition is parsed - it is not, for instance, read at task run time on the task host.

9.6.2 Custom Jinja2 Filters

Jinja2 variable values can be modified by “filters”, using pipe notation. For example, the built-in trim filter strips leading and trailing white space from a string:

 
{% set MyString = "   dog   " %} 
{{ MyString | trim() }}  # "dog"

(See official Jinja2 documentation for available built-in filters.)

Cylc also supports custom Jinja2 filters. A custom filter is a single Python function in a source file with the same name as the function (plus “.py” extension) and stored in one of the following locations:

In the filter function argument list, the first argument is the variable value to be “filtered”, and subsequent arguments can be whatever is needed. Currently there is one custom filter called “pad” in the central cylc Jinja2 filter directory, for padding string values to some constant length with a fill character - useful for generating task names and related values in ensemble suites:

 
{% for i in range(0,100) %}  # 0, 1, ..., 99 
    {% set j = i | pad(2,'0') %} 
    A_{{j}}          # A_00, A_01, ..., A_99 
{% endfor %}
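Such a pad filter could be implemented as a single function, for example (a sketch, not necessarily the code shipped with cylc):

```python
# pad.py - a custom Jinja2 filter; the function name must match the
# file name (minus the ".py" extension) for cylc to find it.
def pad(value, length, fill_char='0'):
    """Left-pad the string form of value to the given length."""
    return str(value).rjust(int(length), fill_char)

print(pad(3, 2, '0'))   # 03
print(pad(42, 2, '0'))  # 42
```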

9.6.3 Associative Arrays In Jinja2

Associative arrays (dicts in Python) can be very useful. Here’s an example, from $CYLC_DIR/examples/jinja2/dict:

 
#!Jinja2 
{% set obs_types = ['airs', 'iasi'] %} 
{% set resource = { 'airs':'ncpus=9', 'iasi':'ncpus=20' } %} 
 
[scheduling] 
    [[dependencies]] 
        graph = "obs" 
[runtime] 
    [[obs]] 
        [[[job submission]]] 
            method = pbs 
    {% for i in obs_types %} 
    [[ {{i}} ]] 
        inherit = obs 
        [[[directives]]] 
             -I = {{ resource[i] }} 
     {% endfor %}

Here’s the result:

 
shell$ cylc get-config -i [runtime][airs]directives SUITE 
-I = ncpus=9

9.6.4 Jinja2 Default Values And Template Inputs

The values of Jinja2 variables can be passed in from the cylc command line rather than hardwired in the suite definition. Here’s an example, from $CYLC_DIR/examples/jinja2/defaults:

 
#!Jinja2 
 
title = "Jinja2 example: use of defaults and external input" 
 
description = """ 
The template variable FIRST_TASK must be given on the cylc command line 
using --set or --set-file=FILE; two other variables, LAST_TASK and 
N_MEMBERS can be set similarly, but if not they have default values.""" 
 
{% set LAST_TASK = LAST_TASK | default( 'baz' ) %} 
{% set N_MEMBERS = N_MEMBERS | default( 3 ) | int %} 
 
{# input of FIRST_TASK is required - no default #} 
 
[scheduling] 
    initial cycle time = 2010080800 
    final cycle time   = 2010081600 
    [[dependencies]] 
        [[[0]]] 
            graph = """{{ FIRST_TASK }} => ens 
                 ens:succeed-all => {{ LAST_TASK }}""" 
[runtime] 
    [[ens]] 
{% for I in range( 0, N_MEMBERS ) %} 
    [[ mem_{{ I }} ]] 
        inherit = ens 
{% endfor %}

Here’s the result:

 
shell$ cylc list SUITE 
Jinja2 Template Error 
'FIRST_TASK' is undefined 
cylc-list foo  failed:  1 
 
shell$ cylc list --set FIRST_TASK=bob foo 
bob 
baz 
mem_2 
mem_1 
mem_0 
 
shell$ cylc list --set FIRST_TASK=bob --set LAST_TASK=alice foo 
bob 
alice 
mem_2 
mem_1 
mem_0 
 
shell$ cylc list --set FIRST_TASK=bob --set N_MEMBERS=10 foo 
mem_9 
mem_8 
mem_7 
mem_6 
mem_5 
mem_4 
mem_3 
mem_2 
mem_1 
mem_0 
baz 
bob

Note also that cylc view --set FIRST_TASK=bob --jinja2 SUITE will show the suite with the Jinja2 variables as set.

Warning: suites started with template variables set on the command line do not currently restart with the same settings - you have to set them again on the cylc restart command line.

9.7 Special Placeholder Variables

Several special variables are used as placeholders in cylc suite definitions:

To use proper variables (c.f. programming languages) in suite definitions, see the Jinja2 template processor (Section 9.6).

9.8 Omitting Tasks At Runtime

It is sometimes convenient to omit certain tasks from the suite at runtime without actually deleting their definitions from the suite.

Defining [runtime] properties for tasks that do not appear in the suite graph results in verbose-mode validation warnings that the tasks are disabled. They cannot be used because the suite graph is what defines their dependencies and valid cycle times. Nevertheless, it is legal to leave these orphaned runtime sections in the suite definition because it allows you to temporarily remove tasks from the suite by simply commenting them out of the graph.

To omit a task from the suite at runtime but still leave it fully defined and available for use (by insertion or cylc submit) use one or both of the [scheduling][[special tasks]] lists, include at start-up or exclude at start-up (documented in Sections A.3.5.8 and A.3.5.7). Then the graph still defines the validity of the tasks and their dependencies, but they are not actually inserted into the suite at start-up. Other tasks that depend on the omitted ones, if any, will have to wait on their insertion at a later time or otherwise be triggered manually.
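For example (item names as documented in Appendix A; the task names are illustrative):

```
[scheduling]
    [[special tasks]]
        # defined and valid per the graph, but not inserted at start-up:
        exclude at start-up = archive, verify
```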

Finally, with Jinja2 (Section 9.6) you can radically alter suite structure by including or excluding tasks from the [scheduling] and [runtime] sections according to the value of a single logical flag defined at the top of the suite.

9.9 Naked Dummy Tasks And Strict Validation

A naked dummy task appears in the suite graph but has no explicit runtime configuration section. Such tasks automatically inherit the default “dummy task” configuration from the root namespace. This is very useful because it allows functional suites to be mocked up quickly for test and demonstration purposes by simply defining the graph. It is somewhat dangerous, however, because there is no way to distinguish an intentional naked dummy task from one generated by typographic error: misspelling a task name in the graph results in a new naked dummy task replacing the intended task in the affected trigger expression; and misspelling a task name in a runtime section heading results in the intended task becoming a dummy task itself (by divorcing it from its intended runtime config section).

To avoid this problem any dummy task used in a real suite should not be naked - i.e. it should have an explicit entry under the [runtime] section of the suite definition, even if the section is empty. This results in exactly the same dummy task behaviour, via implicit inheritance from root, but it allows cylc validate --strict to catch errors in task names by failing the suite if any naked dummy tasks are detected.
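For example, in this minimal sketch (task names are illustrative) prep has no settings of its own, but its empty [runtime] entry is enough to satisfy cylc validate --strict:

```
[scheduling]
    [[dependencies]]
        graph = "prep => model"
[runtime]
    [[prep]]   # empty, but present: not a naked dummy task
    [[model]]
        command scripting = run-model.sh
```

Misspelling prep in the graph would now create a naked dummy task, which strict validation will catch.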

10 Task Implementation

 10.1 Inlined Tasks
 10.2 Returning Proper Error Status
 10.3 Reporting Internal Outputs
 10.4 Other Task Messages
 10.5 Detaching Tasks

Existing tasks (models, scripts, etc.) can generally be used by cylc without modification, subject to the few requirements described in this section:

10.1 Inlined Tasks

Simple tasks can be entirely implemented within the suite.rc file - task command scripting can be a multi-line string.

10.2 Returning Proper Error Status

Tasks should abort with non-zero exit status if a fatal error occurs (this is just standard coding practice anyway). This allows cylc’s task job scripts to automatically trap errors and send a cylc task failed message back to the suite. The shell set -e option can be used in lieu of explicit error checks for every command:

 
#!/bin/bash 
set -e  # abort on error 
mkdir /illegal/dir  # this will abort the script with error status

10.3 Reporting Internal Outputs

If a task has internal outputs that others need to trigger off then it must report completion of those outputs at the appropriate time. Output messages must be unique within the suite or else downstream tasks will trigger off whichever task happens to send the message first; they must exactly match the corresponding outputs registered for the task in the suite definition; and for cycling tasks they must contain the cycle time in order to distinguish between the same outputs of the same task at other cycle times.

The “outputs” example is a self-contained suite that illustrates this:

 
title = "triggering off internal task outputs" 
 
description = """ 
This is a self-contained example (task implementation, including output 
messaging, is entirely contained within the suite definition).""" 
 
[scheduling] 
    initial cycle time = 2010080806 
    final cycle time = 2010080812 
    [[dependencies]] 
        [[[0,12]]] 
          graph = """ 
            foo:out1 => bar 
            foo:out2 => baz 
                  """ 
[runtime] 
    [[foo]] 
        command scripting = """ 
echo HELLO 
sleep 10 
cylc message "foo uploaded file set 1 for $CYLC_TASK_CYCLE_TIME" 
sleep 10 
cylc message "foo $CYLC_TASK_NAME uploaded file set 2 for $CYLC_TASK_CYCLE_TIME" 
sleep 10 
echo BYE""" 
        [[[outputs]]] 
            # [T] is replaced by actual cycle time at run time: 
            out1 = "foo uploaded file set 1 for [T]" 
            out2 = "foo uploaded file set 2 for [T]"
Note the use of [T] as a placeholder for cycle time in messages registered under [[[outputs]]]. These strings are held inside cylc for comparison with incoming task messages; they are never interpreted by the shell and may not contain shell environment variables. The actual messaging calls made by running tasks, on the other hand, can make use of variables in the task runtime environment.
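The matching can be illustrated with a small shell sketch (this is not cylc internals, just the substitution idea: the registered output satisfies an incoming message once the actual cycle time replaces [T]):

```shell
#!/bin/bash
# Substitute the actual cycle time for the [T] placeholder in a
# registered output string, then compare with an incoming message.
registered='foo uploaded file set 1 for [T]'
cycle_time=2010080806
expected="${registered/\[T\]/$cycle_time}"
incoming="foo uploaded file set 1 for $cycle_time"
if [ "$incoming" = "$expected" ]; then
    echo "output satisfied"
fi
```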

10.4 Other Task Messages

General (non-output) messages can also be sent to report progress, warnings, and so on, e.g.:

 
#!/bin/bash 
# a warning message (this will be logged by the suite): 
cylc task message -p WARNING "oops, something's fishy here" 
# information (this will also be logged by the suite): 
cylc task message "Hello from task foo"

Explanatory messages can be sent before aborting on error:

 
#!/bin/bash 
set -e  # abort on error 
if ! mkdir /illegal/dir; then 
    # (use inline error checking to avoid triggering the above 'set -e') 
    cylc task message -p CRITICAL "Failed to create directory /illegal/dir" 
    exit 1 # now abort with non-zero exit status to trigger the task failed message 
fi

Or equivalently, with different syntax:

 
#!/bin/bash 
set -e 
mkdir /illegal/dir || {  # inline error checking using OR operator 
    cylc task message -p CRITICAL "Failed to create directory /illegal/dir" 
    exit 1 
}

But not this:

 
#!/bin/bash 
set -e 
mkdir /illegal/dir  # aborted via 'set -e' 
if [[ $? != 0 ]]; then  # so this will never be reached. 
    cylc task message -p CRITICAL "Failed to create directory /illegal/dir" 
    exit 1 
fi

If critical errors are not reported in this way task failures will still be detected and logged by cylc, but you may have to examine task logs to determine what the problem was.

10.5 Detaching Tasks

If a task spawns another job internally and then detaches and exits without seeing the spawned process through, you must arrange for the detached process to send its own completion messages, because the cylc-generated job script cannot know when it is finished.

First check that you can’t “reconnect” the detaching process. If it is a background shell process, for instance, just run it in the foreground instead. For loadleveler jobs the -s option prevents llsubmit from returning until the job has completed. For Sun Grid Engine, qsub -sync yes has the same effect. Section 11.5 shows how to override the job submission command template to achieve this.

If the detaching process cannot be reconnected, disable cylc’s automatic completion messaging:

 
[runtime] 
    [[foo]] 
        manual completion = True # this is a detaching task

The cylc messaging commands are called like this:

 
#!/bin/bash 
# ... 
if $SUCCESS; then 
    # release my task lock and report success 
    cylc task succeeded 
    exit 0 
else 
    # release my task lock and report failed 
    cylc task failed "Input file X not found" 
    exit 1 
fi

They read environment variables that identify the calling task and the target suite, so the task execution environment must be propagated to the detached process.

One way to handle this is to write a task wrapper that modifies a copy of the detaching task’s native job scripts, on the fly, to insert completion messaging in the appropriate places. An advantage of this method is that you don’t need to permanently modify the model or its associated native scripting for cylc. Another is that you can configure the native job setup for a single test case (running it without cylc) and then have your custom wrapper modify the standalone test case on the fly with suite, task, and cycle-specific parameters as required.

To make this easier, for tasks that declare manual completion messaging cylc makes its non-user-defined environment scripting available in the variable $CYLC_SUITE_ENVIRONMENT, the value of which can be inserted at the appropriate point in the task scripts (just prior to calling the cylc messaging commands as above).

10.5.1 Detaching Tasks And Polling

Another reason to avoid detaching tasks if possible is that they cannot be polled or killed because there is no way for cylc to determine the job ID of the detached process. Attempted polling of a detaching task will just result in cylc logging a warning message.

10.5.2 A Custom Detaching Task Wrapper Example

The detaching example suite contains a script model.sh that runs a pseudo model as follows:

 
#!/bin/bash 
set -e 
 
MODEL="sleep 10; true" 
#MODEL="sleep 10; false"  # uncomment to test model failure 
 
echo "model.sh: executing pseudo-executable" 
echo "model.sh: CYLC_VERSION is $CYLC_VERSION" 
eval $MODEL 
echo "model.sh: done"
This is in turn executed by a script run-model.sh that detaches immediately after job submission (i.e. it exits before the model executable actually runs):
 
#!/bin/bash 
set -e 
echo "run-model.sh: submitting model.sh to 'at now'" 
SCRIPT=model.sh  # location of the model job to submit 
OUT=$1; ERR=$2   # stdout and stderr log paths 
# submit the job and detach 
 
MY_TMPDIR=${CYLC_TMPDIR:-${TMPDIR:-/tmp}} 
 
RES=$MY_TMPDIR/atnow$$.txt 
( at now <<EOF 
$SCRIPT 1> $OUT 2> $ERR 
EOF 
) > $RES 2>&1 
if grep 'No atd running' $RES; then 
    echo 'ERROR: atd is not running!' 
    exit 1 
fi 
# model.sh should now be running at the behest of the 'at' scheduler. 
echo "run-model.sh: done"
Note that the at scheduler daemon (atd) must be running if you want to test this suite.

Here’s a cylc suite to run this unruly model:

 
title = "Cylc User Guide Custom Task Wrapper Example" 
 
description = """This suite runs a single task that internally submits a 
'model executable' before detaching and exiting immediately - so we have 
to handle task completion messaging manually - see the Cylc User Guide.""" 
 
[scheduling] 
    initial cycle time = 2011010106 
    final cycle time = 2011010200 
    [[special tasks]] 
        sequential = model 
    [[dependencies]] 
        [[[0,6,12,18]]] 
        graph = "model" 
 
[runtime] 
    [[model]] 
        manual completion = True 
        command scripting = model-wrapper.sh  # invoke the task via a custom wrapper 
        [[[environment]]] 
            # location of native job scripts to modify for this suite: 
            NATIVESCRIPTS = $CYLC_SUITE_DEF_PATH/native 
            # output path prefix for detached model stdout and stderr: 
            PREFIX = $CYLC_TASK_LOG_ROOT 
            FOO = "$HOME bar $PREFIX"
The suite invokes the task by means of the custom wrapper model-wrapper.sh which modifies, on the fly, a temporary copy of the model’s native job scripts as described above:
 
#!/bin/bash 
set -e 
 
# A custom wrapper for the 'model' task from the detaching example suite. 
# See the Cylc User Guide for more information. 
 
# Check inputs: 
# location of pristine native job scripts: 
cylc util checkvars -d NATIVESCRIPTS 
# path prefix for model stdout and stderr: 
cylc util checkvars PREFIX 
 
MY_TMPDIR=${CYLC_TMPDIR:-${TMPDIR:-/tmp}} 
# Get a temporary copy of the native job scripts: 
TDIR=$MY_TMPDIR/detach$$ 
mkdir -p $TDIR 
cp $NATIVESCRIPTS/* $TDIR 
 
# Insert task-specific execution environment in $TDIR/model.sh: 
SRCH='echo "model.sh: executing pseudo-executable"' 
perl -pi -e "s@^${SRCH}@${CYLC_SUITE_ENVIRONMENT}\n${SRCH}@" $TDIR/model.sh 
 
# Task completion message scripting. Use single quotes here - we don't 
# want the $? variable to evaluate in this shell! 
MSG=' 
if [[ $? != 0 ]]; then 
   cylc task message -p CRITICAL "ERROR: model executable failed" 
   exit 1 
else 
   cylc task succeeded 
   exit 0 
fi' 
# Insert error detection and cylc messaging in $TDIR/model.sh: 
SRCH='echo "model.sh: done"' 
perl -pi -e "s@^${SRCH}@${MSG}\n${SRCH}@" $TDIR/model.sh 
 
# Point to the temporary copy of model.sh, in run-model.sh: 
SRCH='SCRIPT=model.sh' 
perl -pi -e "s@^${SRCH}@SCRIPT=$TDIR/model.sh@" $TDIR/run-model.sh 
 
# Execute the (now modified) native process: 
$TDIR/run-model.sh ${PREFIX}-detached.out ${PREFIX}-detached.err 
 
echo "model-wrapper.sh: see modified job scripts under ${TDIR}!" 
# EOF
If you run this suite, or submit the model task alone with cylc submit, you’ll find that the usual job submission log files for task stdout and stderr end before the task is finished. To see the “model” output and the final task completion message (success or failure), examine the log files generated by the job submitted internally to the at scheduler (their location is determined by the $PREFIX variable in the suite.rc file).

It should not be difficult to adapt this example to real tasks with detaching internal job submission. You will probably also need to replace other parameters, such as model input and output filenames, with suite- and cycle-appropriate values, but exactly the same technique can be used: identify which job script needs to be modified and use text processing tools (such as the single line perl search-and-replace expressions above) to do the job.

11 Task Job Submission, Poll and Kill

 11.1 Job Poll And Kill Support
 11.2 Task Job Scripts
 11.3 Supported Job Submission Methods
 11.4 Task stdout And stderr Logs
 11.5 Overriding The Job Submission Command
 11.6 Defining New Job Submission Methods

Task Implementation (Section 10) describes what requirements a command, script, or program, must fulfill in order to function as a cylc task. This section explains how tasks are submitted by cylc when they are ready to run, and how to define new task job submission methods.

11.1 Job Poll And Kill Support

For most job submission methods cylc now supports polling for real task status, and job kill, from the gcylc GUI and command line (cylc poll and cylc kill). In addition to on-demand polling, submitted and running tasks are polled automatically on suite restart (Section 12.7) and on job submission and execution timeouts. One-way polling can also be used as a regular health check for submitted tasks, and to track tasks on hosts that do not allow return routing for task messaging (Section 12).

11.1.1 Exceptions

Task poll and kill support has not yet been added to the SGE and SLURM job submission methods; it will be added in an upcoming release.

11.2 Task Job Scripts

When a task is ready to run cylc generates a temporary task job script to configure the task’s execution environment and call its command scripting. The job script is the embodiment of all suite.rc runtime settings for the task. It is submitted to run by the job submission method configured for the task. Different tasks can have different job submission methods. Like other runtime properties, you can set a suite default job submission method and override it for specific tasks or families:

 
[runtime] 
   [[root]] # suite defaults 
        [[[job submission]]] 
            method = loadleveler 
   [[foo]] # just task foo 
        [[[job submission]]] 
            method = at

As shown in the Tutorial Section 7.11, job scripts are saved to the suite run directory; the commands used to submit them are printed to stdout by cylc; and they can be printed with the cylc log command or new ones generated and printed with the cylc jobscript command. Take a look at one to see exactly how cylc wraps and runs your tasks.

11.3 Supported Job Submission Methods

Cylc supports a number of commonly used job submission methods, and Section 11.6 shows how to add support for other user-defined job submission methods.

11.3.1 background

Runs tasks directly in a background shell.

11.3.2 at

Submits tasks to the rudimentary Unix at scheduler. The atd daemon must be running.

11.3.3 loadleveler

Submits tasks to loadleveler by the llsubmit command. Loadleveler directives can be provided in the suite.rc file:

 
[runtime] 
    [[__NAME__]] 
        [[[directives]]] 
            foo = bar 
            baz = qux

These are written to the top of the task job script like this:

 
#!/bin/bash 
# DIRECTIVES 
# @ foo = bar 
# @ baz = qux 
# @ queue

11.3.4 pbs

Submits tasks to PBS (or Torque) by the qsub command. PBS directives can be provided in the suite.rc file:

 
[runtime] 
    [[__NAME__]] 
        [[[directives]]] 
            -q = foo 
            -l = 'nodes=1,walltime=00:01:00'

These are written to the top of the task job script like this:

 
#!/bin/bash 
# DIRECTIVES 
#PBS -q foo 
#PBS -l nodes=1,walltime=00:01:00

11.3.5 sge

Submits tasks to Sun/Oracle Grid Engine by the qsub command. SGE directives can be provided in the suite.rc file:

 
[runtime] 
    [[__NAME__]] 
        [[[directives]]] 
            -cwd = ' ' 
            -q = foo 
            -l = 'h_data=1024M,h_rt=24:00:00'

These are written to the top of the task job script like this:

 
#!/bin/bash 
# DIRECTIVES 
#$ -cwd 
#$ -q foo 
#$ -l h_data=1024M,h_rt=24:00:00

11.3.6 slurm

Submits tasks to Simple Linux Utility for Resource Management by the sbatch command. SLURM directives can be provided in the suite.rc file (note that since not all SLURM commands have a short form, cylc requires the long form directives):

 
[runtime] 
    [[__NAME__]] 
        [[[directives]]] 
            --nodes = 5 
            --time = 1:00:00 
            --account = QXZ5W2

These are written to the top of the task job script like this:

 
#!/bin/bash 
#SBATCH --nodes=5 
#SBATCH --time=1:00:00 
#SBATCH --account=QXZ5W2

11.3.7 Default Directives Provided

For job submission methods that use job file directives (PBS, Loadleveler, etc.) default directives are provided to set the job name and the stdout and stderr file paths.

11.3.8 Cylc Quirks (PBS, SGE, ...)

As shown in the example above, multiple entries for the same PBS or SGE directive option must be comma-separated on a single line in the suite.rc file; repeating the option on another line overrides the previous entry rather than adding to it. The right-hand side must also be quoted to hide the commas from the suite.rc parser (commas otherwise indicate list values, whereas directives take a single value).

As also shown in the example above, to get a naked option flag such as -cwd in SGE you must give a quoted blank space as the directive value in the suite.rc file.
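Both quirks can be sketched in a single suite.rc fragment (directive values are illustrative):

```
[runtime]
    [[__NAME__]]
        [[[directives]]]
            # multiple values for one option: comma-separate them on
            # one line, and quote to hide the commas from the parser:
            -l = 'h_data=1024M,h_rt=24:00:00'
            # a naked option flag: a quoted blank space as the value:
            -cwd = ' '
```

A second `-l = ...` line here would silently override the first, not add to it.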

11.4 Task stdout And stderr Logs

When a task is ready to run cylc generates a filename root to be used for the task job script and log files. The filename contains the task name, cycle time (or integer tag), and a submit number that increments if the same task is re-triggered multiple times:

 
# task job script: 
~/cylc-run/tut.oneoff.basic/log/job/hello.1.1 
# task stdout: 
~/cylc-run/tut.oneoff.basic/log/job/hello.1.out 
# task stderr: 
~/cylc-run/tut.oneoff.basic/log/job/hello.1.err

How the stdout and stderr streams are directed into these files depends on the job submission method. The background method just uses output redirection on the command line; the loadleveler method writes appropriate directives to the job script that is submitted to loadleveler.
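The background case can be sketched in isolation (the job script and log paths here are throwaway temporaries, not a real cylc run directory):

```shell
#!/bin/bash
# Redirect a job script's stdout and stderr to .out and .err files,
# as the background method does on its command line.
job=$(mktemp)
printf '%s\n' 'echo HELLO' 'echo OOPS >&2' > "$job"
bash "$job" 1> "$job.out" 2> "$job.err"
out=$(cat "$job.out")
err=$(cat "$job.err")
rm -f "$job" "$job.out" "$job.err"
echo "$out / $err"
```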

Cylc obviously has no control over the stdout and stderr output from tasks that do their own internal output management (e.g. tasks that submit internal jobs and direct the associated output to other files). For less internally complex tasks, however, the files referred to here will be complete task job logs.

11.5 Overriding The Job Submission Command

To change the form of the actual command used to submit a job you do not need to define a new job submission method; just override the command template in the relevant job submission sections of your suite.rc file:

 
[runtime] 
    [[root]] 
        [[[job submission]]] 
            method = loadleveler 
            # Use '-s' to stop llsubmit returning until all job steps have completed: 
            command template = llsubmit -s %s

As explained in the suite.rc reference (Appendix A), the template’s first %s will be substituted by the job file path and, where applicable a second and third %s will be substituted by the paths to the job stdout and stderr files.
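A template using all three substitutions might look like this sketch (the nice prefix is an illustrative assumption, not a cylc default; the %s fields are, in order, the job script, stdout, and stderr paths):

```
[runtime]
    [[root]]
        [[[job submission]]]
            method = background
            command template = nice %s 1> %s 2> %s &
```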

11.6 Defining New Job Submission Methods

Defining a new job submission method requires a little Python programming. You can derive (in the sense of object oriented programming inheritance) new methods from one of the existing ones, or directly from cylc’s job submission base class,

 
$CYLC_DIR/lib/cylc/job_submission/job_submit.py

using the existing job submission methods as examples.

11.6.1 An Example

The following user-defined job submission class, called qsub, overrides the built-in pbs class to change the directive prefix from #PBS to #QSUB:

 
#!/usr/bin/env python 
 
# to import from outside of the cylc source tree: 
from cylc.job_submission.pbs import pbs 
# OR, from $CYLC_DIR/lib/cylc/job_submission 
# from pbs import pbs 
 
class qsub( pbs ): 
    """ 
This is a user-defined job submission method that overrides the '#PBS' 
directive prefix of the built-in pbs method. 
    """ 
    def set_directives( self ): 
        pbs.set_directives( self ) 
        # override the '#PBS' directive prefix 
        self.directive_prefix = "#QSUB"

To check that this works correctly, save the new source file as qsub.py in one of the allowed locations (see below) and use it in a suite definition:

 
# suite.rc 
# $HOME/test/suite.rc 
[scheduling] 
    [[dependencies]] 
        graph = "a" 
[runtime] 
    [[root]] 
        [[[job submission]]] 
            method = qsub 
        [[[directives]]] 
            -I = bar=baz 
            -l = 'nodes=1,walltime=00:01:00' 
            -cwd = ' '

and generate a job script to see the resulting directives:

 
shell$ cylc db reg test $HOME/test 
shell$ cylc jobscript test a | grep QSUB 
#QSUB -e /home/hilary/cylc-run/pbs/log/job/a.1.1.err 
#QSUB -l nodes=1,walltime=00:01:00 
#QSUB -o /home/hilary/cylc-run/pbs/log/job/a.1.1.out 
#QSUB -N a.1 
#QSUB -I bar=baz 
#QSUB -cwd

11.6.2 Where To Put New Job Submission Modules

Your new job submission class code should be saved to a file with the same name as the class (plus a “.py” extension). It can reside in any of the following locations, depending on how generally useful the new method is and whether or not you have write-access to the cylc source tree:

Note that the form of the import statement at the top of the new user-defined Python module differs depending on whether or not the file is installed in the cylc source tree (see the comment at the top of the example file above).

12 Running Suites

 12.1 How Tasks Interact With Running Suites
 12.2 Alternatives To Polling When Routing Is Blocked
 12.3 Task Host Communications Configuration
 12.4 How Commands Interact With Running Suites
 12.5 Connection Authentication
 12.6 How Tasks Get Access To Cylc
 12.7 Restarting Suites
 12.8 Task States
 12.9 Remote Control - Passphrases and Network Ports
 12.10 Ensemble Suites, Job Submission, and Network Timeouts
 12.11 Internal Queues And The Runahead Limit
 12.12 Automatic Task Retry On Failure
 12.13 Suite And Task Event Handling
 12.14 Reloading The Suite Definition At Runtime
 12.15 Handling Job Preemption
 12.16 Runtime Settings Broadcast and Communication Between Tasks
 12.17 The Meaning And Use Of Initial Cycle Time
 12.18 The Simulation And Dummy Run Modes
 12.19 Automated Reference Test Suites
 12.20 Triggering Off Tasks In Other Suites

To learn how to control running suites please also see the Tutorial (Section 7), the command documentation (Section C), and experiment with plenty of test suites.

12.1 How Tasks Interact With Running Suites

Cylc has three ways of tracking the progress of tasks, configured per task host in the site and user config files (Section 6). All three methods can be used on different task hosts within the same suite if necessary.

  1. task-to-suite messaging: cylc job scripts encapsulate task scripting in a wrapper that automatically invokes messaging commands to report progress back to the suite. The messaging commands can be configured to work in two different ways:
    1. Pyro: direct messaging via network sockets using Pyro (Python Remote Objects).
    2. ssh: for tasks hosts that block access to the network ports required by Pyro, cylc can use passwordless ssh to re-invoke task messaging commands on the suite host (where ultimately Pyro is still used to connect to the server process).
  2. polling: for task hosts that do not allow return routing to the suite host for Pyro or ssh, cylc can poll tasks at configurable intervals, using passwordless ssh.

The Pyro communication method is the default because it is the most direct and efficient; the ssh method inserts an extra step in the process (command re-invocation on the suite host); and task polling is the least efficient because results are checked at predetermined intervals, not when task events actually occur.

12.1.1 Task Polling

Be careful to avoid spamming task hosts with polling commands. Each poll opens (and then closes) a new ssh connection. Polling subprocesses are batched by cylc, and the number invoked at once can be configured in the suite definition:

 
[cylc] 
    [[poll and kill command submission]] 
        batch size = 5  # default 10 
        delay between batches = 10 # seconds, default 0

Polling intervals are configurable here because they should be appropriate to the expected task run length. For instance, a task that typically takes an hour to run might be polled every 10 minutes initially, and then every minute toward the end of its run. Interval values are used in turn until the last value, which is used repeatedly until finished:

 
[runtime] 
    [[foo]] 
        # poll every minute in the 'submitted' state: 
        submission polling intervals = 1.0 
        # poll one minute after foo starts running, then every 10 
        # minutes for 50 minutes, then every minute until finished: 
        execution polling intervals = 1.0, 5*10.0, 1.0

A list of intervals with optional multipliers can be used for both submission and execution polling, although a single value is probably sufficient for submission polling. If these items are not configured, default values from the site and user config files will be used under the polling task communication method; polling is not done by default under the other task communication methods (but it can still be used if you like).
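The multiplier syntax can be illustrated with a small shell sketch (this is not cylc code, just the expansion idea) applied to the interval list used above:

```shell
#!/bin/bash
# Expand an interval list such as "1.0, 5*10.0, 1.0" into individual
# polling intervals (minutes): "N*V" means N repetitions of V.
spec="1.0, 5*10.0, 1.0"
intervals=()
IFS=',' read -ra items <<< "$spec"
for item in "${items[@]}"; do
    item="${item// /}"           # strip spaces
    if [[ "$item" == *'*'* ]]; then
        count="${item%%\**}"     # text before the '*'
        value="${item#*\*}"      # text after the '*'
        for ((i=0; i<count; i++)); do intervals+=("$value"); done
    else
        intervals+=("$item")
    fi
done
echo "${intervals[@]}"
```

So a task would be polled one minute after starting, then every ten minutes five times, then every minute until finished.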

Polling is also done automatically once on job submission and execution timeouts, to see if the timed-out task has failed or not; and on suite restarts, to see what happened to any tasks that were orphaned when the suite went down.

12.2 Alternatives To Polling When Routing Is Blocked

If Pyro and ssh ports are blocked but you don’t want to use polling from the suite host,

12.3 Task Host Communications Configuration

Here are the site and user config items relevant to task tracking:

 
#SITE AND USER CONFIG 
 
# Task messaging settings affect task-to-suite communications. 
[task messaging] 
    # If a message send fails, retry after this delay: 
    retry interval in seconds = float( min=1, default=5 ) 
    # If send fails after this many tries, give up trying: 
    maximum number of tries = integer( min=1, default=7 ) 
 
    # This timeout is the same as --pyro-timeout for user commands. If 
    # set to None (no timeout) message send to non-responsive suite 
    # (e.g. suspended with Ctrl-Z) could hang indefinitely. 
    connection timeout in seconds = float( min=1, default=30 ) 
 
# Pyro is required for communications between cylc clients and servers 
# (i.e. between suite-connecting commands and guis, and running suite 
# server processes). 
[pyro] 
 
    # Each suite listens on a dedicated network port. 
    # Servers bind on the first port available from the base port up: 
# SITE ONLY 
    base port = integer( default=7766 ) 
 
    # This sets the maximum number of suites that can run at once. 
# SITE ONLY 
    maximum number of ports = integer( default=100 ) 
 
    # Port numbers are recorded in this directory, by suite name. 
    ports directory = string( default="$HOME/.cylc/ports/" ) 
 
[hosts] 
    # The default task host is the suite host, i.e. localhost: 
    # Add task host sections if local defaults are not sufficient. 
    [[HOST]] 
       # Method of communication of task progress back to the suite: 
        #   1) pyro - direct client-server RPC via network ports 
        #   2) ssh  - re-invoke pyro messaging commands on suite server 
        #   3) poll - the suite polls for status of passive tasks 
        # Pyro RPC is still required in all cases on the suite host 
        # for cylc clients (commands etc.) to communicate with suites. 
        task communication method = option( "pyro", "ssh", "poll", default="pyro" ) 
        # The "poll" method sets a default interval here to ensure no 
        # tasks are accidentally left unpolled. You should override this 
        # with run-length appropriate intervals under task [runtime] - 
        # which will also result in routine polling to check task health 
        # under the pyro or ssh communications methods. 
        default polling interval in minutes = float( min=0.1, default=1.0 )

12.4 How Commands Interact With Running Suites

User-invoked commands that connect to running suites can also choose between direct communication across network sockets (Pyro) and re-invocation of commands on the suite host using passwordless ssh (there is a --use-ssh command option for this purpose).

The gcylc GUI requires direct Pyro connections to its target suite. If that is not possible, run gcylc on the suite host.

12.5 Connection Authentication

All Pyro connections to a running suite (task messaging and user-invoked commands) must authenticate with an arbitrary single line of text in a file called passphrase, which will be found and used automatically if installed properly - see below. A secure MD5 checksum, not the raw passphrase, is passed across the network. A random passphrase is generated in the suite definition directory when a suite is registered, but you can create your own if you wish.

For ssh task messaging and user command re-invocation, on the other hand, the suite passphrase is only required on the suite host account; ssh keys must be installed instead, for passwordless connections.

12.5.1 Suite Pyro Passphrase Locations

Suite passphrases currently have to be installed manually to all task host accounts that use the Pyro communication method (see above); and also to accounts used to run commands that interact directly with the suite via Pyro.

Legal passphrase locations, in order of preference, are:

  1. $CYLC_SUITE_DEF_PATH/passphrase
  2. $HOME/.cylc/SUITE_HOST/SUITE_OWNER/SUITE_NAME/passphrase
  3. $HOME/.cylc/SUITE_HOST/SUITE_NAME/passphrase
  4. $HOME/.cylc/SUITE_NAME/passphrase
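The search order can be demonstrated with a self-contained shell sketch (illustrative only: a throwaway directory stands in for $HOME and the suite definition directory, and only the last candidate location exists):

```shell
#!/bin/bash
# Walk the passphrase locations in order of preference and stop at the
# first file found; host, owner, and suite names are made up.
demo=$(mktemp -d)
mkdir -p "$demo/.cylc/mysuite"
touch "$demo/.cylc/mysuite/passphrase"
found=
for loc in \
    "$demo/suite-def/passphrase" \
    "$demo/.cylc/myhost/owner/mysuite/passphrase" \
    "$demo/.cylc/myhost/mysuite/passphrase" \
    "$demo/.cylc/mysuite/passphrase"
do
    if [ -f "$loc" ]; then found=$loc; break; fi
done
rel=${found#$demo/}
echo "found: $rel"
rm -rf "$demo"
```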

Remote tasks know the location of the remote suite definition directory (if one exists) through their execution environment. Local (suite host) user command invocations can find the suite definition directory in the suite name database. Remote user command invocations, however, cannot interrogate the database on the command host because the suite will not be registered there (cylc cannot assume that the command host shares a common filesystem with the suite host). Consequently remote command host accounts must have the suite passphrase installed in one of the secondary locations under $HOME/.cylc/.

12.6 How Tasks Get Access To Cylc

Running tasks need access to cylc via $PATH, principally for the task messaging commands. To allow this, the first thing a task job script does is set $CYLC_VERSION to the cylc version of the running suite. If you need to run several suites at once under different, incompatible versions of cylc, set $CYLC_VERSION in your environment to the desired version. Developers wishing to run their own copy of cylc rather than a centrally installed one should set $CYLC_HOME in their environment to point to that copy.
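For example (the version number and path below are illustrative, not defaults):

```shell
#!/bin/bash
# e.g. in a login script, to pin suites to a particular cylc version:
export CYLC_VERSION=5.4.5
# or, for a developer running a private copy of cylc:
# export CYLC_HOME=$HOME/cylc
echo "using cylc $CYLC_VERSION"
```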

12.7 Restarting Suites

A restarted suite (see cylc restart --help) is initialized from a previous recorded suite state dump so that it can carry on from wherever it got to before being shut down or killed.

Tasks that were recorded in the submitted or running states are now automatically polled on restart, to see if they are still submitted (e.g. waiting in a PBS batch queue or similar), still running, or if they finished (succeeded or failed) while the suite was down.

Tasks recorded in the failed state at shutdown are not automatically resubmitted on restarting the suite, in case the underlying problem has not been addressed yet.

12.8 Task States

As a suite runs its task proxies transition through the following states:

Note that greyed-out “base graph nodes” in the gcylc graph view do not represent task states; they are displayed to fill out the graph structure where corresponding task proxies do not currently exist in the live task pool.

For manual task state reset purposes ready is a pseudo-state that means waiting with all prerequisites satisfied.

12.9 Remote Control - Passphrases and Network Ports

Connecting to a running suite requires knowing the network port it is listening on, and the suite passphrase to authenticate with once a connection is made to the port.

Suites write their port number to $HOME/.cylc/ports/<SUITE> at start-up, and suite-connecting commands read this file to get the number. An exception to this is the messaging commands called by tasks. Running tasks know the port number from the execution environment provided by the suite (via the task job script).

So, to connect to a suite running on another account you must install the suite passphrase (Section 12.5.1), and configure passwordless ssh so that the port number can be retrieved from the remote port file. Then use the --user and --host command options to connect:

 
shell$ cylc monitor --user=USER --host=HOST SUITE

If you know the port number of the target suite, give it on the command line to prevent the port-retrieving ssh connection being attempted:

 
shell$ cylc monitor --user=USER --host=HOST --port=PORT SUITE

Possession of a suite passphrase gives full control over the suite, and ssh access to the port file also implies full access to the suite host account, so it is recommended that this only be used to interact with your own suites running on other hosts. We plan to implement finer-grained authentication in the future to allow suite owners to grant read-only access to others.

12.10 Ensemble Suites, Job Submission, and Network Timeouts

12.10.1 Parallel Submission Of Jobs Ready At The Same Time

Cylc now handles task job submission in a dedicated worker thread so that submission of many remote tasks at once does not impact cylc’s performance or responsiveness.

Further, for maximum efficiency, job submissions are batched inside the worker thread: batch members are submitted in parallel, and all members must complete (the job submission process, that is, not the submitted task) before the next batch is handled. There is a configurable delay between batches to avoid swamping the host system in the event that hundreds of tasks become ready at the same time:

 
[cylc] 
    [[job submission]] 
        batch size = 50 # default 10 
        delay between batches = 10 # seconds, default 0

Here a 120 task ensemble, for example, would be submitted in two batches of 50 followed by one of 20, with a 10 second delay between batches.
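The batch arithmetic above can be sketched as follows (a shell illustration only; the real batching happens inside cylc's job submission worker thread):

```shell
# How many batches are needed for 120 ready tasks with a batch size of 50?
ntasks=120
batch_size=50
nbatches=$(( (ntasks + batch_size - 1) / batch_size ))     # ceiling division
last_batch=$(( ntasks - (nbatches - 1) * batch_size ))     # size of final batch
echo "$nbatches batches; final batch of $last_batch tasks"
```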

12.10.2 Network Connection Timeouts

A connection timeout can be set in site and user config files (see Section 6) so that messaging commands cannot hang indefinitely if the suite is not responding (this can be caused by suspending a suite with Ctrl-Z), thereby preventing the task from completing. The same can be done on the command line for other suite-connecting user commands, with the --pyro-timeout option.

12.11 Internal Queues And The Runahead Limit

Some cylc suites have the potential to generate too much activity at once, because each task cycles independently, constrained only by dependence on other tasks or by clock triggers. Quick-running tasks at the top of the dependency tree, with no prerequisites and no clock triggers (or when running far behind the clock), will spawn rapidly into the future if not constrained somehow. There are two issues to be aware of here: over-burdening task host resources by submitting too many tasks at once, and over-burdening cylc itself by letting the task pool become too big (when fast tasks spawn ahead of the pack, cylc has to keep them around in the succeeded state until other tasks, which may depend on them, have caught up).

12.11.1 The Suite Runahead Limit

The runahead limit prevents the fastest tasks in a suite from getting too far ahead of the slowest ones. Cylc’s cycle-interleaving abilities make for generally efficient scheduling, but there is no great advantage in letting a few fast data retrieval tasks, say, get far ahead of the slower tasks because it is typically the tasks at the bottom of the dependency tree, which necessarily run last, that generate the final products.

 
[scheduling] 
    runahead limit = 48 # hours

A cycling task spawns its successor when it enters the submitted state or, for sequential tasks, when it finishes. If a newly spawned task’s cycle time is ahead of the oldest task that has not yet finished (succeeded or failed) by more than the runahead limit, it is put into the special runahead held state until other tasks catch up sufficiently; i.e. the runahead limit constrains the number of cycles that can run at once.

The default runahead limit is normally set to twice the minimum cycling interval in the suite. For a suite with 1- and 24-hourly cycling tasks the default limit will be 2 hours, so that two of the hourly cycles can run at once in between the 24-hourly cycles. If there are any future triggers present (graph = "foo[T+24] => bar") that extend beyond the default limit, it is adjusted up to equal the future offset plus one minimum cycling interval.
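The default-limit rule just described can be expressed as simple arithmetic (a sketch; the numbers match the 1- and 24-hourly example with a foo[T+24] future trigger):

```shell
min_interval=1     # hours: smallest cycling interval in the suite
future_offset=24   # hours: largest future-trigger offset, e.g. foo[T+24]
limit=$(( 2 * min_interval ))                   # default: twice the minimum interval
if (( future_offset > limit )); then
    limit=$(( future_offset + min_interval ))   # adjusted up for future triggers
fi
echo "runahead limit: $limit hours"
```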

A manually set runahead limit should not stall the suite even if set to less than the minimum cycling interval, provided that it extends out past any future triggers (see Section 9.3.4.10).

Succeeded and failed tasks are ignored when applying the runahead limit (but tasks that can’t run because they depend on a failed task are not ignored, of course).

12.11.2 Internal Queues

Large suites could potentially swamp the task host hardware or external batch queueing system, depending on the chosen job submission method, by submitting too many tasks at once. Cylc’s internal queues prevent this by limiting the number of tasks, within defined groups, that are active (submitted or running) at once.

A queue is defined by a name; a limit, which is the maximum number of active tasks allowed for the queue; and a list of member tasks, which are assigned by name to the queue.

Queue configuration is done under the [scheduling] section of the suite.rc file, not as part of the runtime namespace hierarchy, because, like dependencies, queues constrain when a task runs rather than what it does once it is submitted. When runtime family relationships and queues do coincide you can assign task family members en masse to queues by using the family name, as shown in the example suite listing below.

By default every task is assigned to a default queue, which by default has a zero limit (interpreted by cylc as no limit). To use a single queue for the whole suite just set the default queue limit:

 
[scheduling] 
    [[queues]] 
        # limit the entire suite to 5 active tasks at once 
        [[[default]]] 
            limit = 5

To use other queues just name each one, set the limit, and assign member tasks:

 
[scheduling] 
    [[queues]] 
        [[[q_foo]]] 
            limit = 5 
            members = foo, bar, baz

Any tasks not assigned to a particular queue will remain in the default queue. The queues example suite illustrates how queues work by running two task trees side by side (as seen in the graph GUI), limited to 2 and 3 active tasks respectively:

 
title = demonstrates internal queueing 
description = """ 
Two trees of tasks: the first uses the default queue set to a limit of 
two active tasks at once; the second uses another queue limited to three 
active tasks at once. Run via the graph control GUI for a clear view. 
              """ 
[scheduling] 
    [[queues]] 
        [[[default]]] 
            limit = 2 
        [[[foo]]] 
            limit = 3 
            members = n, o, p, fam2, u, v, w, x, y, z 
    [[dependencies]] 
        graph = """ 
            a => b & c => fam1:succeed-all => h & i & j & k & l & m 
            n => o & p => fam2:succeed-all => u & v & w & x & y & z 
                """ 
[runtime] 
    [[fam1,fam2]] 
    [[d,e,f,g]] 
        inherit = fam1 
    [[q,r,s,t]] 
        inherit = fam2
Note assignment of runtime task family members to queues using the family name.

12.12 Automatic Task Retry On Failure

See also Section A.4.1.9 in the Suite.rc Reference.

Tasks can be configured with a list of “retry delay” periods, in minutes, such that if a task fails it will go into a temporary retrying state and then automatically resubmit itself after the next specified delay period expires. A usage example is shown in the suite listed below under Suite And Task Event Handling, Section 12.13.
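For instance, a minimal retry configuration might look like this (a sketch; the task name and delay values are illustrative):

```
[runtime]
    [[foo]]
        # retry after 10 minutes, then twice more at 30 minute intervals
        retry delays = 10, 30, 30
```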

12.13 Suite And Task Event Handling

See also Sections A.2.8 and A.4.1.20 in the Suite.rc Reference.

Cylc can call nominated event handlers when certain suite or task events occur. This is intended to facilitate centralized alerting and automated handling of critical events. Event handlers can send an email or an SMS, call a pager, and so on; or intervene in the operation of their own suite using cylc commands. cylc [hook]email-suite and cylc [hook]email-task are example event handlers packaged with cylc.

Event handlers can be located in the suite bin directory, otherwise it is up to you to ensure their location is in $PATH (in the shell in which cylc runs, on the suite host).

Task event handlers are passed the following arguments by cylc:

 
<task-event-handler> EVENT SUITE TASKID MESSAGE

where EVENT is one of the following:

MESSAGE, if provided, describes what has happened, and TASKID identifies the task (NAME.CYCLE for cycling tasks).

The retry event occurs if a task fails and has any remaining retries configured (see Section 12.12). The event handler will be called as soon as the task fails, not after the retry delay period when it is resubmitted.

Note that event handlers are called by cylc itself, not by the running tasks, so if you wish to pass them additional information via the environment you must use [cylc] [[environment]], not task runtime environments.

Here is an example suite that tests the retry and failed events. The handler in this case simply echoes its command line arguments to suite stdout.

 
[scheduling] 
    initial cycle time = 2010080800 
    final cycle time = 2010081000 
    [[dependencies]] 
        [[[0]]] 
            graph = "foo => bar" 
[runtime] 
    [[foo]] 
        retry delays = 0, 0.5 
        command scripting = """ 
echo TRY NUMBER: $CYLC_TASK_TRY_NUMBER 
sleep 10 
# retry twice and succeed on the final try, 
# but fail definitively in the final cycle. 
if (( CYLC_TASK_TRY_NUMBER <= 2 )) || \ 
    (( CYLC_TASK_CYCLE_TIME == CYLC_SUITE_FINAL_CYCLE_TIME )); then 
    echo ABORTING 
    /bin/false 
fi""" 
        [[[event hooks]]] 
            retry handler = "echo !!!!!EVENT!!!!! " 
            failed handler = "echo !!!!!EVENT!!!!! "

12.14 Reloading The Suite Definition At Runtime

The cylc reload command reloads the suite definition at run time. This allows: (a) changing task configuration such as command scripting or environment; and (b) adding tasks to, or removing them from, the suite definition - all without shutting the suite down and restarting it. (It is easy to shut down and restart cylc suites, but reloading may be useful if you don’t want to wait for long-running tasks to finish first.)

Note that defined tasks can already be added to or removed from a running suite with the cylc insert and cylc remove commands; the reload command allows addition and removal of task definitions. If a new task definition is added (and used in the graph) you will still need to manually insert an instance of it (with a particular cycle time) into the running suite. If a task definition (and its use in the graph) is deleted, existing task proxies of the deleted type will run their course after the reload but new instances will not be spawned. Changes to a task definition will only take effect when the next task instance is spawned (existing instances will not be affected).

12.15 Handling Job Preemption

Some HPC facilities allow job preemption: the resource manager can kill or suspend running low priority jobs in order to make way for high priority jobs. The preempted jobs may then be automatically restarted by the resource manager, from the same point (if suspended) or requeued to run again from the start (if killed). If a running cylc task gets suspended or hard-killed (kill -9 <PID> is not a trappable signal so cylc cannot detect task failure in this case) and then later restarted, it will just appear to cylc as if it takes longer than normal to run. If the job is soft-killed the signal will be trapped by the task job script and a failure message sent, resulting in cylc putting the task into the failed state. When the preempted task restarts and sends its started message cylc would normally treat this as an error condition (a dead task is not supposed to be sending messages) - a warning will be logged and the task will remain in the failed state. However, if you know that preemption is possible on your system you can tell cylc that affected tasks should be resurrected from the dead, to carry on as normal if progress messages start coming in again after a failure:

 
# ... 
[runtime] 
    [[on_HPC]] 
        enable resurrection = True 
    [[TaskFoo]] 
        inherit = on_HPC 
# ...

To test this in any suite, manually kill a running task then, after cylc registers the task failed, resubmit the killed job manually by cutting-and-pasting the original job submission command from the suite stdout stream.

12.16 Runtime Settings Broadcast and Communication Between Tasks

The cylc broadcast command overrides [runtime] settings in a running suite. This can be used to communicate information to downstream tasks by broadcasting environment variables (communication of information from one task to another normally takes place via the filesystem, i.e. the input/output file relationships embodied in inter-task dependencies). Variables (and any other runtime settings) may be broadcast to all subsequent tasks, or targeted at a specific task, at all subsequent tasks with a given name, or at all tasks with a given cycle time; see broadcast command help for details.

Broadcast settings targeted at a specific task ID or cycle time expire and are forgotten as the suite moves on. Untargeted variables and those targeted at a task name persist throughout the suite run, even across restarts, unless manually cleared using the broadcast command - and so should be used sparingly.

12.17 The Meaning And Use Of Initial Cycle Time

When a suite is started with the cylc run command (cold or warm start) the cycle time at which it starts can be given on the command line or hardwired into the suite.rc file:

 
cylc run foo 2012080806

or,

 
[scheduling] 
    initial cycle time = 2010080806

An initial cycle time given on the command line will override one in the suite.rc file.

12.17.1 The Environment Variable CYLC_SUITE_INITIAL_CYCLE_TIME

In the case of cold starts only, the initial cycle time will also be passed through to task execution environments as $CYLC_SUITE_INITIAL_CYCLE_TIME. The intended use of this variable is to allow tasks to determine whether they are running in the initial cold-start cycle (when different behaviour may be required) or in a normal mid-run cycle. This is not done for warm starts because a warm start is really an implicit restart - it does not reference a particular previous suite state, but it does assume that a previous cycle (for each task) has been run and completed entirely. It follows that in a warm start tasks are really in a normal mid-run cycle, and because no actual previous state is referenced $CYLC_SUITE_INITIAL_CYCLE_TIME gets the value None. After a cold start, however, the value of the environment variable does persist across restarts because the original cold-start cycle time is stored in suite state dump files.
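A task might use the variable like this (a sketch, with the environment variables hard-wired to example values that cylc would normally provide):

```shell
# Values below are illustrative; in a real task cylc exports these
# to the task execution environment.
CYLC_SUITE_INITIAL_CYCLE_TIME=2010080806
CYLC_TASK_CYCLE_TIME=2010080806

if [[ "$CYLC_TASK_CYCLE_TIME" == "$CYLC_SUITE_INITIAL_CYCLE_TIME" ]]; then
    echo "cold-start cycle: run special initial behaviour"
else
    echo "normal mid-run cycle"
fi
```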

12.18 The Simulation And Dummy Run Modes

Since cylc-4.6.0 any cylc suite can run in live, simulation, or dummy mode. Prior to that release simulation mode was a hybrid mode that replaced real tasks with local dummy tasks. This allowed local simulation testing of any suite, to get the scheduling right without running real tasks, but running dummy tasks locally does not add much value over a pure simulation (in which no tasks are submitted at all) because all job submission configuration has to be ignored and most task job script sections have to be cut out to avoid any code that could potentially be specific to the intended task host. So at 4.6.0 we replaced this with a pure simulation mode (task proxies go through the running state automatically within cylc, and no dummy tasks are submitted to run) and a new dummy mode in which only the real task command scripting is dummied out - each dummy task is submitted exactly as the task it represents on the correct host and in the same execution environment. A successful dummy run confirms not only that the scheduling works correctly but also tests real job submission, communication from remote task hosts, and the real task job scripts (in which errors such as use of undefined variables will cause a task to fail).

The run mode, which defaults to live, is set on the command line (for run and restart):

 
shell$ cylc run --mode=dummy SUITE

but you can configure the suite to force a particular run mode,

 
[cylc] 
    force run mode = simulation

This can be used, for example, for demo suites that necessarily run out of their original context; or to temporarily prevent accidental execution of expensive real tasks during suite development.

Dummy mode task command scripting just prints a message and sleeps for ten seconds by default, but you can override this behaviour for particular tasks or task groups if you like. Here’s how to make a task sleep for twenty seconds and then fail in dummy mode:

 
[runtime] 
    [[foo]] 
        command scripting = "run-real-task.sh" 
        [[[dummy mode]]] 
            command scripting = """ 
echo "hello from dummy task $CYLC_TASK_ID" 
sleep 20 
echo "ABORTING" 
/bin/false"""

Finally, in simulation mode each task takes between 1 and 15 seconds to “run” by default, but you can also alter this for particular tasks or groups of tasks:

 
[runtime] 
    [[foo]] 
        run time range = 20,31 # (between 20 and 30 seconds) 
        command scripting = "echo ABORTING; /bin/false" # fail in dummy mode

Note that to get a failed simulation or dummy mode task to succeed on re-triggering, just change the suite.rc file appropriately and reload the suite definition at run time with cylc reload SUITE before re-triggering the task.

Dummy mode is equivalent to commenting out each task’s command scripting to expose the default scripting.

12.18.1 The Non-live-mode Accelerated Clock

In simulation and dummy mode cylc uses an accelerated clock with configurable rate and offset relative to the suite’s initial cycle time. This affects the trigger time of any clock-triggered tasks in the suite, and the length of time between cycles if simulating “caught up” operation (without this a six-hour cycling suite, for instance, would wait six hours between cycles when simulating caught-up operation, even though the simulated or dummy tasks run very quickly). By configuring the initial clock offset you can quickly simulate how suites catch up and transition from delayed to real time operation.

See Section A.2.11 for accelerated clock configuration settings.

12.18.2 Restarting Suites With A Different Run Mode?

The run mode is recorded in the suite state dump file. Cylc will not let you restart a non-live mode suite in live mode, or vice versa - any attempt to do the former would certainly be a mistake (because the simulation mode dummy tasks do not generate any of the real outputs depended on by downstream live tasks), and the latter, while feasible, would corrupt the live state dump by turning it over to simulation mode. The easiest way to test a live suite in simulation mode, if you don’t want to obliterate the current state dump by doing a cold or warm start (as opposed to a restart from the previous state), is to take a quick copy of the suite and run the copy in simulation mode. However, if you really want to run a live suite forward in simulation mode without copying it, do this:

  1. Back up the live mode suite state dump file.
  2. Edit the mode line in the state dump and restart in simulation mode.
  3. Later, restart the live suite from the backed-up live state dump.

12.19 Automated Reference Test Suites

Reference tests are finite-duration suite runs that abort with non-zero exit status if any of the following conditions occur (by default):

The default shutdown event handler for reference tests is cylc hook check-triggering which compares task triggering information (what triggers off what at run time) in the test run suite log to that from an earlier reference run, disregarding the timing and order of events - which can vary according to the external queueing conditions, runahead limit, and so on.

To prepare a reference log for a suite, run it with the --reference-log option, and manually verify the correctness of the reference run.

To reference test a suite, just run it (in dummy mode for the most comprehensive test without running real tasks) with the --reference-test option.

A battery of reference tests is used to automatically test cylc before posting a new release version. Reference tests can also be used at cylc upgrade time to check that the upgrade will not break your own complex suites - the triggering check will catch any bug that causes a task to run when it shouldn’t, for instance; even in a dummy mode reference test the full task job script (sans real command scripting) has to execute successfully on the proper task host by the proper job submission method.

Reference tests can be configured with the following settings:

 
[cylc] 
    [[reference test]] 
        suite shutdown event handler = cylc check-triggering 
        required run mode = dummy 
        allow task failures = False 
        live mode suite timeout = 5 # minutes 
        dummy mode suite timeout = 2 
        simulation mode suite timeout = 2

12.19.1 Roll-your-own Reference Tests

If the default reference test is not sufficient for your needs, firstly note that you can override the default shutdown event handler, and secondly that the --reference-test option is merely a short cut to the following suite.rc settings which can also be set manually if you wish:

 
[cylc] 
    abort if any task fails = True 
    [[event hooks]] 
        shutdown handler = cylc check-triggering 
        timeout = 5 
        abort if shutdown handler fails = True 
        abort on timeout = True

12.20 Triggering Off Tasks In Other Suites

The cylc suite-state command, which interrogates suite run databases, has a polling mode that waits on a given task achieving a given state. See cylc suite-state --help for command options and defaults.

The suite graph notation also allows you to define local tasks that, in effect, represent tasks in other suites by automatically polling for them using the cylc suite-state command. Here’s how to trigger a task bar off a task foo in another suite called other.suite:

 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "FOO<other.suite::foo> => bar"

Local task FOO will poll for the success of foo in suite other.suite at the same cycle time. Other task states can be polled like this,

 
   graph = "FOO<other.suite::foo:fail> => bar"

Default polling parameters (the maximum number of polls and the interval between them) are printed by cylc suite-state --help. These can be configured if necessary under the local polling task runtime section:

 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "FOO<other.suite::foo> => bar" 
[runtime] 
    [[FOO]] 
        [[[suite state polling]]] 
            max-polls = 100 
            interval = 10 # seconds

The remote suite does not have to be running when polling commences (or at all if the remote condition has already been achieved) because the command interrogates the suite run database, not the suite server process.

For suites owned by others or those with run databases in non-standard locations use the --run-dir option or, in-suite,

 
[runtime] 
    [[FOO]] 
        [[[suite state polling]]] 
            run-dir = /path/to/top/level/cylc/run-directory

To trigger off remote tasks with different cycle times just arrange for the local polling task to be on the same cycling sequence as the remote task that it represents. For instance, if local task cat cycles 6-hourly at 0,6,12,18 but needs to trigger off a remote task dog with cycle times of 3,9,15,21 hours,

 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "DOG<other.suite::dog>[T-3] => cat"

This results in DOG having cycle times of 3,9,15,21 - the same as dog in other.suite.

13 Other Topics In Brief

The following topics have yet to be documented in detail.

14 Suite Storage, Discovery, Revision Control, and Deployment

 14.1 Rose

Small groups of cylc users can of course share suites by manual copying, and generic revision control tools can be used on cylc suites as for any collection of files. Beyond this, cylc does not have a built-in solution for suite storage and discovery, revision control, and deployment on a network. That is not cylc’s core purpose, and large sites may have preferred revision control systems and suite meta-data requirements that are difficult to anticipate. We can, however, recommend the use of Rose to do all of this very easily and elegantly with cylc suites.

14.1 Rose

Rose is a framework for managing and running suites of scientific applications, developed at the UK Met Office for use with cylc. It is available under the open source GPL license.

15 Suite Design Principles

 15.1 Make Fine-Grained Suites
 15.2 Make Tasks Rerunnable
 15.3 Make Models Rerunnable
 15.4 Limit Previous-Instance Dependence
 15.5 Put Task Cycle Time In All Output File Paths
 15.6 How To Manage Input/Output File Dependencies
 15.7 Use Generic Task Scripts
 15.8 Make Suites Portable
 15.9 Make Tasks As Self-Contained As Possible
 15.10 Make Suites As Self-Contained As Possible
 15.11 Orderly Product Generation?
 15.12 Clock-triggered Tasks Wait On External Data
 15.13 Do Not Treat Real Time Operation As Special
 15.14 Factor Out Common Configuration
 15.15 Use The Graph For Scheduling
 15.16 Use Suite Visualization

15.1 Make Fine-Grained Suites

A suite can contain a small number of large, internally complex tasks; a large number of small, simple tasks; or anything in between. Cylc can easily handle a large number of tasks, however, so there are definite advantages to fine-graining:

15.2 Make Tasks Rerunnable

It should be possible to rerun a task by simply resubmitting it for the same cycle time. In other words, failure at any point during execution of a task should not render a rerun impossible by corrupting the state of some internal-use file, or whatever. It is difficult to overstate the usefulness of being able to rerun the same task multiple times, either outside of the suite with cylc submit, or by retriggering it within the running suite, when debugging a problem.

15.3 Make Models Rerunnable

If a warm-cycled model simply overwrites its restart files in each run, the only cycle that can subsequently run is the next one. This is dangerous because if, accidentally or otherwise, the task runs for the wrong cycle time, its restart files will be corrupted such that the correct cycle can no longer run (probably necessitating a cold-start). Instead, consider organising restart files by cycle time, through a file or directory naming convention, and keep them in a simple rolling archive (cylc’s filename templating and housekeeping utilities can easily do this for you). Then, given availability of external inputs, you can easily rerun the task for any cycle still in the restart archive.
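A minimal sketch of such a rolling archive (directory names are hypothetical, and GNU head/xargs are assumed; cylc's filename templating and housekeeping utilities provide a more complete solution):

```shell
# Cycle-stamped restart directories; cycle times sort chronologically as strings.
mkdir -p restart/2010080800 restart/2010080806 restart/2010080812 \
         restart/2010080818 restart/2010080900

# Prune all but the three most recent cycles (illustrative housekeeping).
ls -d restart/* | head -n -3 | xargs -r rm -rf

ls restart/
```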

15.4 Limit Previous-Instance Dependence

Cylc does not require that successive instances of the same task run sequentially. In order to take advantage of this and achieve maximum functional parallelism whenever the opportunity arises (usually when catching up from a delay), you should ensure that tasks that in principle do not depend on their own previous instances (the vast majority of tasks in most suites, in fact) do not do so in practice. In other words, they should be able to run as soon as their prerequisites are satisfied regardless of whether or not their predecessors have finished yet. This generally just means ensuring that all file I/O contains the generating task’s cycle time in the file or directory name so that there is no interference between successive instances. If this is difficult to achieve in particular cases, however, you can declare the offending tasks to be sequential.

15.5 Put Task Cycle Time In All Output File Paths

Having all filenames, or perhaps the names of their containing directories, stamped with the cycle time of the generating task greatly aids in managing suite disk usage, both for archiving and cleanup. It also enables the aforementioned task rerunnability recommendation by avoiding overwrite of important files from one cycle to the next. Cylc has powerful utilities for cycle time offset filename templating and housekeeping.
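For example, a cycle-stamped output path might be constructed like this (a sketch; the variable values are hard-wired here, whereas in a real task cylc provides them in the execution environment, and the path layout is hypothetical):

```shell
CYLC_TASK_CYCLE_TIME=2010080806   # normally exported by cylc
CYLC_TASK_NAME=postproc           # normally exported by cylc

# cycle time in the directory path keeps successive cycles separate
OUTFILE=output/$CYLC_TASK_CYCLE_TIME/$CYLC_TASK_NAME.nc
echo $OUTFILE
```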

15.5.1 Use Cylc Cycle Time Filename Templating

The command line utility program cylc [util] cycletime computes offsets (in hours, days, months, and years) from a given or current (in the environment) cycle time, and optionally inserts the resulting computed cycle time, or components of it, into a given template string containing “YYYY” as a placeholder for the year value, “MM” for month, and so on. This can be used in the suite.rc environment or command scripting sections, or in task implementation scripting, to generate filenames containing the current cycle time (or some offset from it) for use by tasks.

See cylc [util] cycletime --help for examples.

15.6 How To Manage Input/Output File Dependencies

Dependencies between tasks usually, though not always, take the form of files generated by one task that are used by other tasks. It is possible to manage these files across a suite without hard wiring I/O locations and therefore compromising suite flexibility and portability.

15.7 Use Generic Task Scripts

If your suite contains multiple logically distinct tasks that actually have similar functionality (e.g. for moving files around, or for generating similar products from the output of several similar models) have the corresponding cylc tasks all call the same command, script, or executable - just provide different input parameters via the task command scripting and/or execution environment, in the suite.rc file.

15.8 Make Suites Portable

If every task in a suite is configured to put its output under $HOME (i.e. the environment variable, literally, not the explicit path to your home directory; and similarly for temporary directories, etc.) then other users will be able to copy the suite and run it immediately, after merely ensuring that any external input files are in the right place.

For the ultimate in portability, construct suites in which all task I/O paths are dynamically configured to be user and suite (registration) specific, e.g.

 
$HOME/output/$CYLC_SUITE_REG_PATH

(these variables are automatically exported to the task execution environment by cylc - see Task Execution Environment, Section 9.4.7). Then you can run multiple instances of the suite at once (even under the same user account) without changing anything, and they will not interfere with each other.

You can test changes to a portable suite safely by making a quick copy of it in a temporary directory, then modifying and running the test copy without fear of corrupting the output directories, suite logs, and suite state, of the original.

15.9 Make Tasks As Self-Contained As Possible

Where possible, no task should rely on the action of another task, except for the prerequisites embodied in the suite dependency graph that it has no choice but to depend on. If this rule is followed, your suite will be as flexible as possible in terms of being able to run single tasks, or subsets of the suite, whilst debugging or developing new features. For example, every task should create its own output directories if they do not already exist, instead of assuming their existence due to the action of some other task; then you will be able to run single tasks without having to manually create output directories first.

 
# manual task scripting: 
  # 1/ create $OUTDIR if it doesn't already exist: 
  mkdir -p $OUTDIR 
  # 2/ create the parent directory of $OUTFILE if it doesn't exist: 
  mkdir -p $( dirname $OUTFILE ) 
 
# OR using the cylc checkvars utility: 
  # 1/ check vars are defined, and create directories if necessary: 
  cylc util checkvars -c OUTDIR1 OUTDIR2 #... 
  # 2/ check vars are defined, and create parent dirs if necessary: 
  cylc util checkvars -p OUTFILE1 OUTFILE2 #...

15.10 Make Suites As Self-Contained As Possible

The only compulsory content of a cylc suite definition directory is the suite.rc file. However, you can store whatever you like in a suite definition directory;8 other files there will be ignored by cylc but suite tasks can access them via the $CYLC_SUITE_DEF_PATH variable that cylc automatically exports into the task execution environment. Disk space is cheap - if all programs, ancillary files, control files (etc.) required by the suite are stored in the suite definition directory instead of having the suite reference external build directories (etc.), you can turn the directory into a revision control repository and be virtually assured of the ability to exactly reproduce earlier versions, regardless of suite complexity.

15.11 Orderly Product Generation?

Correct scheduling is not equivalent to “orderly generation of products by cycle time”. Under cylc, a product generation task will trigger as soon as its prerequisites are satisfied (i.e. when its input files are ready, generally) regardless of whether other tasks with the same cycle time have finished or have yet to run. If your product delivery or presentation system demands that all products for one cycle time are uploaded (or whatever) before any from the next cycle, then be aware that this may be quite inefficient if your suite is ever faced with catching up from a significant delay or running over historical data.

If you must, however, you can introduce artificial dependencies into your suite to ensure that the final products never arrive out of sequence. One way of doing this would be to have a final “product upload” task that depends on completion of all the real product generation tasks at the same cycle time, and then declare it to be sequential.
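For example (a sketch with hypothetical task names), a final upload task can depend on all product generation tasks at its cycle time and be declared sequential so that successive cycles upload strictly in order:

```
[scheduling]
    [[special tasks]]
        sequential = upload_products
    [[dependencies]]
        [[[0,6,12,18]]]
            graph = "prod_a & prod_b => upload_products"
```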

15.12 Clock-triggered Tasks Wait On External Data

All tasks in a cylc suite know their own private cycle time, but most don’t care about the wall clock time - they just run when their prerequisites are satisfied. The exception to this is clock-triggered tasks, which wait on a wall clock time expressed as an offset from their own cycle time, in addition to any other prerequisites. The usual purpose of these tasks is to retrieve real time data from the external world, triggering at roughly the expected time of availability of the data. Triggering the task at the right time is up to cylc, but the task itself should go into a check-and-wait loop in case the data is delayed; only on successful detection or retrieval should the task report success and then exit (or perhaps report failure and then exit if the data has not arrived by some cutoff time).
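Such a check-and-wait loop might look like this in task shell scripting (a minimal sketch; the function name and its parameters are hypothetical):

```shell
# check_and_wait FILE MAX_TRIES INTERVAL
# Poll for FILE every INTERVAL seconds; succeed once it appears,
# fail (non-zero status) if it has not appeared after MAX_TRIES polls.
check_and_wait() {
    local file=$1 max=$2 interval=$3 n=0
    until [[ -e $file ]]; do
        n=$((n + 1))
        if (( n >= max )); then
            return 1  # data did not arrive by the cutoff
        fi
        sleep "$interval"
    done
    return 0
}
```

A real clock-triggered retrieval task would call something like this on its expected input file, then report success (or failure, after the cutoff) and exit.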

15.13 Do Not Treat Real Time Operation As Special

Cylc suites, without modification, can handle real time and delayed operation equally well.

In real time operation clock-triggered tasks constrain the behaviour of the whole suite, or at least of all tasks downstream of them in the dependency graph.

In delayed operation (whether due to an actual delay in an operational suite or because you’re running an historical trial) clock-triggered tasks will not constrain the suite at all, and cylc’s cycle interleaving abilities come to the fore, because their trigger times have already passed. But if a clock-triggered task happens to catch up to the wall clock, it will automatically wait again. In this way a cylc suite naturally and seamlessly transitions between delayed and real time operation as required.

15.14 Factor Out Common Configuration

Properties shared by multiple tasks (job submission settings, environment variables, command scripting, etc.) should ideally be defined only once. Cylc supports several ways of achieving this:

Multiple inheritance is very efficient when tasks share many properties. Jinja variables are more efficient when single items are shared by just a few tasks that don’t have anything else in common (e.g. an environment variable for the location of a shared file).

For environment variables in particular it may be tempting to define all variables for all tasks once under [root], but this is somewhat analogous to the overuse of global variables in programming: it can make it difficult to determine which variables matter to which tasks. Environment filters (Section A.4.1.22) can be used to make this safer, however.
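As a sketch of the safer approach (assuming the environment filter syntax of Section A.4.1.22; variable, path, and task names are hypothetical), a task can restrict which root-level variables it actually inherits:

```
[runtime]
    [[root]]
        [[[environment]]]
            OBS_DIR = /path/to/obs
            PROD_DIR = /path/to/products
    [[obs_get]]
        [[[environment filter]]]
            include = OBS_DIR
```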

Finally, Jinja2 can also be used to avoid defining intermediate environment variables for the sole purpose of deriving other environment variables at task run time. Instead of this:

 
[runtime] 
    [[root]] 
        [[[environment]]] 
            OUTPUT_DIR=/my/top/outputdir 
    [[foo]] 
        [[[environment]]] 
            FOO_OUTPUT_DIR=$OUTPUT_DIR/foo 
            BAR_OUTPUT_DIR=$OUTPUT_DIR/bar

do this:

 
{% set OUTPUT_DIR = "/my/top/outputdir" %} 
[runtime] 
    [[foo]] 
        [[[environment]]] 
            FOO_OUTPUT_DIR={{ OUTPUT_DIR }}/foo 
            BAR_OUTPUT_DIR={{ OUTPUT_DIR }}/bar

If the values of these Jinja2 variables are needed in external scripts, just translate them directly in environment sections:

 
    [[[environment]]] 
        OUTPUT_DIR = {{ OUTPUT_DIR }}

15.15 Use The Graph For Scheduling

If you find yourself writing runtime scripting to get a task to change its behaviour significantly from one cycle to the next, consider that the graph is usually the proper place to express this sort of thing. Use different task names, but have them inherit common properties from a family namespace to avoid duplication. Instead of this:

 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "foo => bar => baz" 
[runtime] 
    [[bar]] 
        command scripting = """ 
if [[ $( cylc cycletime --print-hour ) == 06 || \ 
      $( cylc cycletime --print-hour ) == 18 ]]; then 
    SENTENCE="the quick brown fox" 
else 
    SENTENCE="the lazy dog" 
fi 
echo $SENTENCE""" 
        # (...other config...)

do this:

 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "foo => bar_a => baz" 
        [[[6,18]]] 
            graph = "foo => bar_b => baz" 
[runtime] 
    [[BAR]] 
        # (... other config...) 
        command scripting = "echo $SENTENCE" 
    [[bar_a]] 
        inherit = BAR 
        [[[environment]]] 
            SENTENCE = "the quick brown fox" 
    [[bar_b]] 
        inherit = BAR 
        [[[environment]]] 
            SENTENCE = "the lazy dog"

15.16 Use Suite Visualization

Effective visualization can make complex suites easier to understand. Collapsible task families for visualization are defined by the first parents in the runtime namespace hierarchy. Tasks should generally be grouped into visualization families that reflect their purpose within the structure of the suite rather than technical detail such as common job submission method or task host. This often coincides nicely with common configuration inheritance requirements, but if it doesn’t you can use an empty namespace as a first parent for visualization:

 
[runtime] 
    [[OBSPROC]] 
    [[obs1, obs2, obs3]] 
        inherit = OBSPROC

and you can demote parents from primary to secondary:

 
[runtime] 
    [[HOSTX]] 
        # common settings for tasks on host HOSTX 
    [[foo]] 
        inherit = None, HOSTX

16 Style Guide

 16.1 Line Indentation
 16.2 Comments
 16.3 Other

Good style is arguably just a matter of taste. That said, for collaborative development of complex systems it is important to settle on a clear and consistent style. You may find the following suggestions useful.

16.1 Line Indentation

The suite.rc file format consists of item = value pairs under nested section headings. Clear indentation is the best way to show local nesting level inside large blocks.
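For example, with each nesting level indented consistently (four spaces here, as in the examples throughout this guide), the structure is immediately visible:

```
[runtime]
    [[foo]]
        [[[environment]]]
            MY_VAR = value
```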

16.2 Comments

16.3 Other

A Suite.rc Reference

 A.1 Top Level Items
 A.2 [cylc]
 A.3 [scheduling]
 A.4 [runtime]
 A.5 [visualization]
 A.6 Special Placeholder Variables In Suite Definitions
 A.7 Default Suite Configuration

This appendix defines all legal suite definition config items. Embedded Jinja2 code (see Section 9.6) must process to a valid raw suite.rc file. See also Section 9.2 for a descriptive overview of suite.rc files, including syntax (Section 9.2.1).

A.1 Top Level Items

The only top level configuration items at present are the suite title and description.

A.1.1 title

A single line description of the suite. It is displayed in the db viewer window and can be retrieved at run time with the cylc show command.

A.1.2 description

A multi-line description of the suite. It can be retrieved by the db viewer right-click menu, or at run time with the cylc show command.

A.2 [cylc]

This section is for configuration that is not specifically task-related.

A.2.1 [cylc] required run mode

If this item is set cylc will abort if the suite is not started in the specified mode. This can be used for demo suites that have to be run in simulation mode, for example, because they have been taken out of their normal operational context; or to prevent accidental submission of expensive real tasks during suite development.

A.2.2 [cylc] UTC mode

Cylc runs off the suite host’s system clock by default. This item allows you to run the suite in UTC even if the system clock is set to local time. Clock-triggered tasks will trigger when the current UTC time is equal to their cycle time plus offset; other time values used, reported, or logged by cylc will also be in UTC.

A.2.3 [cylc] abort if any task fails

Cylc does not normally abort if tasks fail, but if this item is turned on it will abort with exit status 1 if any task fails.

A.2.4 [cylc] log resolved dependencies

If this is turned on cylc will write the resolved dependencies of each task to the suite log as it becomes ready to run (a list of the IDs of the tasks that actually satisfied its prerequisites at run time). Mainly used for cylc testing and development.

A.2.5 [cylc] [[job submission]]

Tasks ready to submit are queued for processing in a background worker thread, so submitting a lot of tasks at once does not hold cylc back. In the job submission thread tasks are batched, with members of each batch being submitted in parallel. Batches are processed serially, with a delay between batches, to avoid swamping the host system with too many simultaneous job submissions.

The time required for a single task’s job submission to complete typically depends on whether it is a remote task (for which an ssh connection must be established and used) and whether dynamic host selection is used (see Section A.4.1.19.1; a dynamic host selection command runs as part of the job submission command). The time taken for a batch of parallel job submissions to complete will be roughly the duration of the slowest member process.

[cylc] [[job submission]] batch size The maximum number of tasks to be submitted in a single batch, in the job submission thread. Cylc waits for all batch member job-submissions to complete before proceeding to the next batch.

[cylc] [[job submission]] delay between batches It may cause a problem for some batch queue schedulers to submit too many jobs at once, so cylc allows a configurable delay between job submission batches.
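For example (the values are illustrative only):

```
[cylc]
    [[job submission]]
        batch size = 10
        delay between batches = 15
```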

A.2.6 [cylc] [[poll and kill command submission]]

Task poll and kill commands are queued to a worker thread that processes them in parallel, in batches to limit the number that can execute at once.

[cylc] [[poll and kill command submission]] batch size The maximum number of poll and kill commands to execute at once, before moving on to the next batch.

[cylc] [[poll and kill command submission]] delay between batches How long to wait, in seconds, before processing the next batch of poll and kill commands.

A.2.7 [cylc] [[event handler submission]]

Task event handlers are queued to a worker thread that processes them in parallel, in batches to limit the number that can execute at once (suite event handlers, on the other hand, are executed as background sub-processes in the main thread, not queued to the task event handler thread).

[cylc] [[event handler submission]] batch size The maximum number of event handlers to execute at once, before moving on to the next batch.

[cylc] [[event handler submission]] delay between batches How long to wait, in seconds, before processing the next batch of event handlers.

A.2.8 [cylc] [[event hooks]]

Cylc has internal “hooks” to which you can attach handlers that are called by cylc whenever certain events occur. This section configures suite event hooks; see Section A.4.1.20 for task event hooks.

Event handlers can send an email or an SMS, call a pager, intervene in the operation of their own suite, or whatever. They can be held in the suite bin directory; otherwise it is up to you to ensure their location is in $PATH (in the shell in which cylc runs, on the suite host). cylc [hook] email-suite is a simple example of a suite event handler.

Suite event handlers are called by cylc with the following arguments:

 
<suite-event-handler> EVENT SUITE MESSAGE

where,

Additional information can be passed to event handlers via [cylc] [[environment]].
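A handler can be a very simple script; the sketch below (hypothetical, written as a shell function for illustration) just formats and prints the three arguments cylc passes in. In practice this body would be a standalone script held in the suite bin directory or in $PATH:

```shell
# A suite event handler is called as: <handler> EVENT SUITE MESSAGE
handle_suite_event() {
    local event=$1 suite=$2 message=$3
    printf 'suite %s: %s: %s\n' "$suite" "$event" "$message"
}
```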

[cylc] [[event hooks]] EVENT handler Specify a handler script to call when one of the following EVENTs occurs:

Item details:

[cylc] [[event hooks]] timeout If a timeout is set and the timeout event is handled, the timeout event handler will be called if the suite times out before it finishes. The timer is set initially at suite start up.

[cylc] [[event hooks]] reset timer If True (the default) the suite timer will continually reset after any task changes state, so you can time out after some interval since the last activity occurred rather than on absolute suite execution time.

[cylc] [[event hooks]] abort on timeout If a suite timer is set (above) this will cause the suite to abort with error status if the suite times out while still running.

[cylc] [[event hooks]] abort if EVENT handler fails Cylc does not normally care whether an event handler succeeds or fails, but if this is turned on the EVENT handler will be executed in the foreground (which will block the suite while it is running) and the suite will abort if the handler fails.

A.2.9 [cylc] [[lockserver]]

The cylc lockserver brokers suite and task locks on the network (these are somewhat analogous to traditional local lock files). It prevents multiple instances of a suite or task from being invoked at the same time (via scheduler instances or cylc submit).

See cylc lockserver --help for how to run the lockserver, and cylc lockclient --help for occasional manual lock management requirements.

[cylc] [[lockserver]] enable The lockserver is currently disabled by default. It is intended mainly for operational use.

[cylc] [[lockserver]] simultaneous instances By default the lockserver prevents multiple simultaneous instances of a suite from running even under different registered names. But allowing this may be desirable if the I/O paths of every task in the suite are dynamically configured to be suite specific (and similarly for the suite state dump and logging directories, by using suite identity variables in their directory paths). Note that the lockserver cannot protect you from running multiple distinct copies of a suite simultaneously.

A.2.10 [cylc] [[environment]]

Variables defined here are exported into the environment in which cylc itself runs. They are then available to local processes spawned directly by cylc. Any variables read by task event handlers must be defined here, for instance, because event handlers are executed directly by cylc, not by running tasks. And similarly the command lines issued by cylc to invoke event handlers or to submit task job scripts could, in principle, make use of environment variables defined here.

Warnings

[cylc] [[environment]] __VARIABLE__ Replace __VARIABLE__ with any number of environment variable assignment expressions. Values may refer to other local environment variables (order of definition is preserved) and are not evaluated or manipulated by cylc, so any variable assignment expression that is legal in the shell in which cylc is running can be used (but see the warning above on variable expansions, which will not be evaluated). White space around the ‘=’ is allowed (as far as cylc’s suite.rc parser is concerned these are normal configuration items).

A.2.11 [cylc] [[accelerated clock]]

Accelerated clock settings, used to speed up the wait between cycles in the simulation and dummy run modes.

[cylc] [[accelerated clock]] disable Disabling the accelerated clock makes the suite (and its log time stamps etc.) run on real time. Note that if the suite has clock-triggered tasks that catch up to the wall clock, the interval between cycles will also be in real time - e.g. six hours for a six hourly cycle.

[cylc] [[accelerated clock]] rate The rate at which the accelerated clock runs in real seconds per simulated hour.

[cylc] [[accelerated clock]] offset The clock offset determines the initial time on the accelerated clock, at suite startup, relative to the initial cycle time. An offset of 0 simulates real time operation; greater offsets simulate catch up from a delay and subsequent transition to real time operation.
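For example (illustrative values), a clock that runs at ten real seconds per simulated hour and starts 24 hours ahead of the initial cycle time, simulating catch-up from a day’s delay:

```
[cylc]
    [[accelerated clock]]
        rate = 10
        offset = 24
```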

A.2.12 [cylc] [[reference test]]

Reference tests are finite-duration suite runs that abort with non-zero exit status if cylc fails, if any task fails, if the suite times out, or if a shutdown event handler that (by default) compares the test run with a reference run reports failure. See Automated Reference Test Suites, Section 12.19.

[cylc] [[reference test]] suite shutdown event handler A shutdown event handler that should compare the test run with the reference run, exiting with zero exit status only if the test run verifies.

As for any event handler, the full path can be omitted if the script is located somewhere in $PATH or in the suite bin directory.

[cylc] [[reference test]] required run mode If your reference test is only valid for a particular run mode, this setting will cause cylc to abort if a reference test is attempted in another run mode.

[cylc] [[reference test]] allow task failures A reference test run will abort immediately if any task fails, unless this item is set, or a list of expected task failures is provided (below).

[cylc] [[reference test]] expected task failures A reference test run will abort immediately if any task fails, unless allow task failures is set (above) or the failed task is found in a list of the IDs of tasks that are expected to fail.

[cylc] [[reference test]] live mode suite timeout The timeout value in minutes after which the test run should be aborted if it has not finished, in live mode. Test runs cannot be done in live mode unless you define a value for this item, because it is not possible to arrive at a sensible default for all suites.

[cylc] [[reference test]] simulation mode suite timeout The timeout value in minutes after which the test run should be aborted if it has not finished, in simulation mode. Test runs cannot be done in simulation mode unless you define a value for this item, because it is not possible to arrive at a sensible default for all suites.

[cylc] [[reference test]] dummy mode suite timeout The timeout value in minutes after which the test run should be aborted if it has not finished, in dummy mode. Test runs cannot be done in dummy mode unless you define a value for this item, because it is not possible to arrive at a sensible default for all suites.

A.3 [scheduling]

This section allows cylc to determine when tasks are ready to run.

A.3.1 [scheduling] initial cycle time

At startup each cycling task (unless specifically excluded under [special tasks]) will be inserted into the suite with this cycle time, or with the closest subsequent valid cycle time for the task. Note that whether or not cold-start tasks, specified under [special tasks], are inserted, and in what state they are inserted, depends on the start up method - cold, warm, or raw. If this item is provided you can override it on the command line or in the gcylc suite start panel.

A.3.2 [scheduling] final cycle time

Cycling tasks are held once they pass the final cycle time, if one is specified. Once all tasks have achieved this state the suite will shut down. If this item is provided you can override it on the command line or in the gcylc suite start panel.

A.3.3 [scheduling] runahead limit

The suite runahead limit prevents the fastest tasks in a suite from getting too far ahead of the slowest ones, as documented in Section 12.11.1. Tasks exceeding the limit are put into a special runahead held state until slower tasks have caught up sufficiently.

A.3.4 [scheduling] [[queues]]

Configuration of internal queues, by which the number of simultaneously active tasks (submitted or running) can be limited, per queue. By default a single queue called default is defined, with all tasks assigned to it and no limit. To use a single queue for the whole suite just set the limit on the default queue as required. See also Section 12.11.2.

[scheduling] [[queues]] [[[__QUEUE__]]] Section heading for configuration of a single queue. Replace __QUEUE__ with a queue name, and repeat the section as required.

[scheduling] [[queues]] [[[__QUEUE__]]] limit The maximum number of active tasks allowed at any one time, for this queue.

[scheduling] [[queues]] [[[__QUEUE__]]] members A list of member tasks, or task family names, to assign to this queue (assigned tasks will automatically be removed from the default queue).
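For example (queue name, limits, and task names are hypothetical), limiting the whole suite to ten active tasks while allowing at most two of the large model tasks to be active at once:

```
[scheduling]
    [[queues]]
        [[[default]]]
            limit = 10
        [[[big_jobs]]]
            limit = 2
            members = model_a, model_b, model_c
```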

A.3.5 [scheduling] [[special tasks]]

This section is used to identify any tasks with several kinds of special behaviour. By default (i.e. non “special” behaviour) tasks submit (or queue) as soon as their prerequisites are satisfied, and they spawn a successor as soon as they enter the submitted state.9 Family names used here are interpreted purely as shorthand for the list of all member tasks. A sequential family, therefore, is a family of sequential tasks, not a family that behaves “sequentially” as a whole.

[scheduling] [[special tasks]] clock-triggered Clock-triggered tasks wait on a wall clock time specified as an offset in hours relative to their own cycle time, in addition to any dependence they have on other tasks. Generally speaking, only tasks that wait on external real time data need to be clock-triggered. Note that in computing the trigger time the full wall clock time and cycle time are compared, not just hours and minutes of the day, so when running a suite in catchup/delayed operation, or over historical periods, clock-triggered tasks will not constrain the suite at all until they catch up to the wall clock.
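For example (a sketch, assuming the task-name(offset-hours) form; the task name is hypothetical), an observation retrieval task that triggers two hours after its own cycle time:

```
[scheduling]
    [[special tasks]]
        clock-triggered = get_obs(2)
```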

[scheduling] [[special tasks]] start-up Start-up tasks are one-off tasks (they do not spawn a successor) that only run in the first cycle (and only in a cold-start) and any dependence on them is ignored in subsequent cycles. They can be used to prepare a suite workspace, for example, before other tasks run. Start-up tasks cannot appear in conditional trigger expressions with normal cycling tasks, because the meaning of the conditional expression becomes undefined in subsequent cycles.

[scheduling] [[special tasks]] cold-start A cold-start task is a one-off task used to satisfy the dependence of an associated task with the same cycle time, on outputs from a previous cycle - when those outputs are not available. The primary use for this is to cold-start a warm-cycled forecast model that normally depends on restart files (e.g. model background fields) generated by its previous forecast, when there is no previous forecast. This is required when cold-starting the suite, but cold-start tasks can also be inserted into a running suite to restart a model that has had to skip some cycles after running into problems. Cold-start tasks can invoke real cold-start processes, or they can just be dummy tasks that represent some external process that has to be completed before the suite is started. Unlike start-up tasks, dependence on cold-start tasks is preserved in subsequent cycles so they must typically be used in OR’d conditional expressions to avoid holding up the suite.

[scheduling] [[special tasks]] sequential By default, a task spawns a successor as soon as it is submitted to run so that successive instances of the same task can run in parallel if the opportunity arises (i.e. if their prerequisites happen to be satisfied before their predecessor has finished). Sequential tasks, however, will not spawn a successor until they have finished successfully. This should be used for (a) tasks that cannot run in parallel with their own previous instances because they would somehow interfere with each other (use cycle time in all I/O paths to avoid this); and (b) warm cycled forecast models that write out restart files for multiple cycles ahead (exception: see “explicit restart outputs” below).10

[scheduling] [[special tasks]] one-off Synchronous one-off tasks have an associated cycle time but do not spawn a successor. Synchronous start-up and cold-start tasks are automatically one-off tasks and do not need to be listed here. Dependence on one-off tasks is not restricted to the first cycle.

[scheduling] [[special tasks]] explicit restart outputs This is only required in the event that you need a warm cycled forecast model to start at the instant its restart files are ready (if other prerequisites are satisfied) even if its previous instance has not finished yet. If so, the model task has to depend on special output messages emitted by the previous instance as soon as its restart files are ready, instead of just on the previous instance finishing. Tasks in this category must define special restart output messages containing the word “restart”, in [runtime] [[TASK]] [[[outputs]]] - see Section 10.3.

[scheduling] [[special tasks]] exclude at start-up Any task listed here will be excluded from the initial task pool (this goes for suite restarts too). If an inclusion list is also specified, the initial pool will contain only included tasks that have not been excluded. Excluded tasks can still be inserted at run time. Other tasks may still depend on excluded tasks if they have not been removed from the suite dependency graph, in which case some manual triggering, or insertion of excluded tasks, may be required.

[scheduling] [[special tasks]] include at start-up If this list is not empty, any task not listed in it will be excluded from the initial task pool (this goes for suite restarts too). If an exclusion list is also specified, the initial pool will contain only included tasks that have not been excluded. Excluded tasks can still be inserted at run time. Other tasks may still depend on excluded tasks if they have not been removed from the suite dependency graph, in which case some manual triggering, or insertion of excluded tasks, may be required.

A.3.6 [scheduling] [[dependencies]]

The suite dependency graph is defined under this section. You can plot the dependency graph as you work on it, with cylc graph or by right clicking on the suite in the db viewer. See also Section 9.3.

[scheduling] [[dependencies]] graph The dependency graph for any one-off asynchronous (non-cycling) tasks in the suite goes here. This can be used to construct a suite of one-off tasks (e.g. build jobs and related processing) that just completes and then exits, or an initial suite section that completes prior to the cycling tasks starting (if you make the first cycling tasks depend on the last one-off ones). But note that synchronous start-up tasks can also be used for the latter purpose. See Section A.3.6.2.1 below for graph string syntax, and Section 9.3.

[scheduling] [[dependencies]] [[[__VALIDITY__]]] __VALIDITY__ section headings define the sequence of cycle times for which the subsequent graph section is valid. For cycling tasks use a comma-separated list of integer hours, 0 ≤ H ≤ 23, for the original hours-of-the-day cycling, or reference a particular stepped daily, monthly, or yearly cycling module:

For repeating asynchronous tasks put ‘ASYNCID:pattern’ in the section heading, where pattern is a regular expression that matches an asynchronous task ID:

See Section 9.3.3, Graph Types for the meaning of the stepped cycler arguments, how multiple graph sections combine within a single suite, and so on.

[scheduling] [[dependencies]] [[[__VALIDITY__]]] graph The dependency graph for the specified validity section (described just above) goes here. Syntax examples follow; see also Sections 9.3 (Configuring Scheduling) and 9.3.4 (Trigger Types).

[scheduling] [[dependencies]] [[[__VALIDITY__]]] daemon For [[[ASYNCID:pattern]]] validity sections only, list asynchronous daemon tasks by name. This item is located here rather than under [scheduling] [[special tasks]] because a daemon task is associated with a particular asynchronous ID.

A.4 [runtime]

This section is used to specify how, where, and what to execute when tasks are ready to run. Common configuration can be factored out in a multiple-inheritance hierarchy of runtime namespaces that culminates in the tasks of the suite. Order of precedence is determined by the C3 linearization algorithm as used to find the method resolution order in Python language class hierarchies. For details and examples see Section 9.4, Runtime Properties.

A.4.1 [runtime] [[__NAME__]]

Replace __NAME__ with a namespace name, or a comma separated list of names, and repeat as needed to define all tasks in the suite. Names may contain letters, digits, underscores, and hyphens. A namespace represents a group or family of tasks if other namespaces inherit from it, or a task if no others inherit from it.

If multiple names are listed the subsequent settings apply to each.

All namespaces inherit initially from root, which can be explicitly configured to provide or override default settings for all tasks in the suite.

[runtime] [[__NAME__]] inherit A list of the immediate parent(s) this namespace inherits from. If no parents are listed root is assumed.

[runtime] [[__NAME__]] title A single line description of this namespace. It is displayed by the cylc list command and can be retrieved from running tasks with the cylc show command.

[runtime] [[__NAME__]] description A multi-line description of this namespace, retrievable from running tasks with the cylc show command.

[runtime] [[__NAME__]] initial scripting Initial scripting is executed at the top of the task job script just before the cylc task started message call is made, and before the task execution environment is configured - so it does not have access to any suite or task environment variables. The original intention was to allow remote tasks to source login scripts before calling the first cylc command, e.g. to set $PYTHONPATH if Pyro has been installed locally. Note however that the remote task invocation mechanism now automatically sources both /etc/profile and $HOME/.profile if they exist. For other uses pre-command scripting should be used if possible because it has access to the task execution environment.

[runtime] [[__NAME__]] environment scripting Environment scripting is inserted into the task job script between the cylc-defined environment (suite and task identity, etc.) and the user-defined task runtime environment - i.e. it has access to the cylc environment, and the task environment has access to the results of this scripting.

[runtime] [[__NAME__]] command scripting The scripting to execute when the associated task is ready to run - this can be a single command or multiple lines of scripting.

[runtime] [[__NAME__]] pre-command scripting Scripting to be executed immediately before the command scripting. This would typically be used to add scripting to every task in a family (for individual tasks you could just incorporate the extra commands into the main command scripting). See also post-command scripting, below.

[runtime] [[__NAME__]] post-command scripting Scripting to be executed immediately after the command scripting. This would typically be used to add scripting to every task in a family (for individual tasks you could just incorporate the extra commands into the main command scripting). See also pre-command scripting, above.

[runtime] [[__NAME__]] retry delays A list of time intervals in minutes, after which to resubmit the task if it fails. The variable $CYLC_TASK_TRY_NUMBER in the task execution environment is incremented each time, starting from 1 for the first try - this can be used to vary task behavior by try number.
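
For illustration (the task name and script are hypothetical), the following resubmits a failed task after 1 minute, then twice more at 5 minute intervals, and passes the try number to the script:

  [runtime]
      [[model]]
          retry delays = 1, 5, 5
          # $CYLC_TASK_TRY_NUMBER is 1 on the first try, 2 on the first retry, ...
          command scripting = run-model.sh --try $CYLC_TASK_TRY_NUMBER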

[runtime] [[__NAME__]] submission polling intervals A list of intervals, in minutes, with optional multipliers, after which cylc will poll for status while the task is in the submitted state.

For the polling task communications method this overrides the default submission polling interval in the site/user config files (Section 6). For pyro and ssh task communications polling is not done by default but it can still be configured here as a regular check on the health of submitted tasks.

Each list value is used in turn until the last, which is used repeatedly until finished.

Detaching tasks cannot be polled or killed by cylc - see Section 10.5.

A single interval value is probably appropriate for submission polling.

[runtime] [[__NAME__]] execution polling intervals A list of intervals, in minutes, with optional multipliers, after which cylc will poll for status while the task is in the running state.

For the polling task communications method this overrides the default execution polling interval in the site/user config files (Section 6). For pyro and ssh task communications polling is not done by default, but it can still be configured here as a regular check on the health of running tasks.

Each list value is used in turn until the last, which is used repeatedly until finished.

Detaching tasks cannot be polled or killed by cylc - see Section 10.5.
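
A sketch of both polling items together (hypothetical task name; this assumes the m*n multiplier shorthand means m repeats of an n-minute interval):

  [runtime]
      [[model]]
          # one check, 5 minutes after job submission:
          submission polling intervals = 5
          # frequent checks early on, then every 30 minutes until finished:
          execution polling intervals = 5*2, 30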

[runtime] [[__NAME__]] manual completion If a task’s initiating process detaches and exits before task processing is finished then cylc cannot arrange for the task to automatically signal when it has succeeded or failed. In such cases you must use this configuration item to tell cylc not to arrange for automatic completion messaging, and insert some minimal completion messaging yourself in appropriate places in the task implementation (see Section 10.5).

[runtime] [[__NAME__]] work sub-directory Task command scripting is executed from within automatically created work directories, which can be accessed by tasks through $CYLC_TASK_WORK_DIR. This item sets the lowest-level sub-directory name. The default value provides a unique workspace for each task, but it can be overridden to make groups of tasks run in the same working directory, thereby providing a shared space for tasks that read and write files in their current working directories.

[runtime] [[__NAME__]] enable resurrection If a message is received from a failed task cylc will normally treat this as an error condition, issue a warning, and leave the task in the “failed” state. But if “enable resurrection” is switched on failed tasks can come back from the dead: if the same task job script is executed again cylc will put the task back into the running state and continue as normal when the started message is received. This can be used to handle HPC-style job preemption, wherein a resource manager may kill a running task and reschedule it to run again later, to make way for a job with higher immediate priority. See also Section 12.15, Handling Job Preemption.

[runtime] [[__NAME__]] [[[dummy mode]]] Dummy mode configuration.

[runtime] [[__NAME__]] [[[dummy mode]]] command scripting The scripting to execute when the associated task is ready to run, in dummy mode - this can be a single command or multiple lines of scripting.

[runtime] [[__NAME__]] [[[dummy mode]]] disable pre-command scripting This disables pre-command scripting, which is likely to contain code specific to the real task, in dummy mode.

[runtime] [[__NAME__]] [[[dummy mode]]] disable post-command scripting This disables post-command scripting, which is likely to contain code specific to the real task, in dummy mode.

[runtime] [[__NAME__]] [[[simulation mode]]] Simulation mode configuration.

[runtime] [[__NAME__]] [[[simulation mode]]] run time range This defines an interval [min,max) in seconds from within which the simulation mode task run length will be randomly chosen.

[runtime] [[__NAME__]] [[[job submission]]] This section configures the means by which cylc submits task job scripts to run.

[runtime] [[__NAME__]] [[[job submission]]] method See Task Job Submission (Section 11) for how job submission works, and how to define new methods. Cylc has a number of built-in job submission methods:

[runtime] [[__NAME__]] [[[job submission]]] command template This allows you to override the actual command used by the chosen job submission method. The template’s first %s will be substituted by the job file path. Where applicable the second and third %s will be substituted by the paths to the job stdout and stderr files.
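
For example, to modify how the background method invokes the job - running it under nice here (a sketch, not necessarily the built-in default template):

  [runtime]
      [[model]]
          [[[job submission]]]
              method = background
              # %s substitutions: job file path, stdout path, stderr path
              command template = nice nohup %s </dev/null >%s 2>%s &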

[runtime] [[__NAME__]] [[[job submission]]] shell This is the shell used to interpret the job script submitted by cylc when a task is ready to run. It has no bearing on the shell used in task implementations. Command scripting and suite environment variable assignment expressions must be valid for this shell. The latter is currently hardwired into cylc as export item=value - valid for both bash and ksh because value is entirely user-defined - but cylc would have to be modified slightly to allow use of the C shell.

[runtime] [[__NAME__]] [[[job submission]]] retry delays A list of time intervals in minutes, after which to resubmit if job submission fails.

[runtime] [[__NAME__]] [[[remote]]] Configure host and username, for tasks that do not run on the suite host account. Passwordless ssh is used to submit the task by the configured job submission method, so you must distribute your ssh key to allow this. Cylc must be installed on remote task hosts, but of the external software dependencies only Pyro is required there (not even that if ssh messaging is used; see below).

[runtime] [[__NAME__]] [[[remote]]] host The remote host for this namespace. This can be a static hostname, an environment variable that holds a hostname, or a command that prints a hostname to stdout. Host selection commands are executed just prior to job submission. The host (static or dynamic) may have an entry in the cylc site or user config file to specify parameters such as the location of cylc on the remote machine; if not, the corresponding local settings (on the suite host) will be assumed to apply on the remote host.

[runtime] [[__NAME__]] [[[remote]]] owner The username of the task host account. This is (only) used in the passwordless ssh command invoked by cylc to submit the remote task; consequently it may be defined using local environment variables (i.e. those of the shell in which cylc runs, and [cylc] [[environment]]).

If you use dynamic host selection and have different usernames on the different selectable hosts, you can configure your $HOME/.ssh/config to handle username translation.
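
A minimal $HOME/.ssh/config sketch for this (host names and usernames are hypothetical):

  Host hpc-a
      User jbloggs
  Host hpc-b
      User joe.bloggs

With entries like these in place, the [[[remote]]] owner item can be left unset for the affected hosts.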

[runtime] [[__NAME__]] [[[remote]]] suite definition directory The path to the suite definition directory on the remote host, needed if remote tasks require access to files stored there (via $CYLC_SUITE_DEF_PATH) or in the suite bin directory (via $PATH). If this item is not defined, the local suite definition directory path will be assumed, with the suite owner’s home directory, if present, replaced by '$HOME' for interpretation on the remote host.

[runtime] [[__NAME__]] [[[event hooks]]] Cylc has internal “hooks” to which you can attach handlers that are called by cylc whenever certain events occur. This section configures task event hooks; see Section A.2.8 for suite event hooks.

Event handlers can send an email or an SMS, call a pager, intervene in the operation of their own suite, or whatever. They can be held in the suite bin directory, otherwise it is up to you to ensure their location is in $PATH (in the shell in which cylc runs, on the suite host). cylc [hook] email-task is a simple task event handler.

Task event handlers are called by cylc with the following arguments:

 
<task-event-handler> EVENT SUITE TASK MESSAGE

where EVENT is the name of the triggering event, SUITE is the suite name, TASK is the task ID, and MESSAGE is the associated task message.

Additional information can be passed to event handlers via the [cylc] [[environment]] (but not via task runtime environments - event handlers are not called by tasks).

[runtime] [[__NAME__]] [[[event hooks]]] EVENT handler Specify a handler script to call when one of the following EVENTs occurs:

Item details:

[runtime] [[__NAME__]] [[[event hooks]]] submission timeout If a task has not started the specified number of minutes after it was submitted, the submission timeout event handler will be called.

[runtime] [[__NAME__]] [[[event hooks]]] execution timeout If a task has not finished the specified number of minutes after it started running, the execution timeout event handler will be called.

[runtime] [[__NAME__]] [[[event hooks]]] reset timer If you set an execution timeout the timer can be reset to zero every time a message is received from the running task (which indicates the task is still alive). Otherwise, the task will timeout if it does not finish in the allotted time regardless of incoming messages.
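
A sketch of task event hook configuration (the handler script names are hypothetical; they must be in the suite bin directory or otherwise in $PATH on the suite host, and the exact event item names should be checked against the EVENT list above):

  [runtime]
      [[model]]
          [[[event hooks]]]
              failed handler = notify-failure.sh
              execution timeout = 180    # minutes
              execution timeout handler = notify-timeout.sh
              reset timer = True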

[runtime] [[__NAME__]] [[[environment]]] The user defined task execution environment. Variables defined here can refer to cylc suite and task identity variables, which are exported earlier in the task job script, and variable assignment expressions can use cylc utility commands because access to cylc is also configured earlier in the script. See also Task Execution Environment, Section 9.4.7.

[runtime] [[__NAME__]] [[[environment]]] __VARIABLE__ Replace __VARIABLE__ with any number of environment variable assignment expressions. Order of definition is preserved so values can refer to previously defined variables. Values are passed through to the task job script without evaluation or manipulation by cylc, so any variable assignment expression that is legal in the job submission shell can be used. White space around the ‘=’ is allowed (as far as cylc’s suite.rc parser is concerned these are just normal configuration items).
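
An illustrative example (the variable names and paths are hypothetical; the cylc identity variables referred to are exported earlier in the job script):

  [runtime]
      [[model]]
          [[[environment]]]
              DATA_DIR = $HOME/data/$CYLC_SUITE_REG_NAME
              RUN_LABEL = run-$CYLC_TASK_CYCLE_TIME
              # values may refer to previously defined variables:
              OUTPUT_DIR = $DATA_DIR/output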

[runtime] [[__NAME__]] [[[environment filter]]] This section contains environment variable inclusion and exclusion lists that can be used to filter the inherited environment. This is not intended as an alternative to a well-designed inheritance hierarchy that provides each task with just the variables it needs. Filters can, however, improve suites with tasks that inherit a lot of environment they don’t need, by making it clear which tasks use which variables. They can optionally be used routinely as explicit “task environment interfaces” too, at some cost to brevity, because they guarantee that variables filtered out of the inherited task environment are not used.

Note that environment filtering is done after inheritance is completely worked out, not at each level on the way, so filter lists in higher-level namespaces only have an effect if they are not overridden by descendants.

[runtime] [[__NAME__]] [[[environment filter]]] include If given, only variables named in this list will be included from the inherited environment, others will be filtered out. Variables may also be explicitly excluded by an exclude list.

[runtime] [[__NAME__]] [[[environment filter]]] exclude Variables named in this list will be filtered out of the inherited environment. Variables may also be implicitly excluded by omission from an include list.
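
For example (hypothetical variable and namespace names):

  [runtime]
      [[FAM]]
          [[[environment]]]
              VAR1 = 1
              VAR2 = 2
              VAR3 = 3
      [[member]]
          inherit = FAM
          [[[environment filter]]]
              # this task's job script gets VAR1 and VAR2 but not VAR3:
              include = VAR1, VAR2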

[runtime] [[__NAME__]] [[[directives]]] Batch queue scheduler directives. Whether or not these are used depends on the job submission method. For the built-in loadleveler, pbs, and sge methods directives are written to the top of the task job script in the correct format for the method. Specifying directives individually like this allows use of default directives that can be individually overridden at lower levels of the runtime namespace hierarchy.

[runtime] [[__NAME__]] [[[directives]]] __DIRECTIVE__ Replace __DIRECTIVE__ with each directive assignment, e.g. class = parallel

Example directives for the built-in job submission methods are shown in Section 11.3.

[runtime] [[__NAME__]] [[[outputs]]] This section is only required if other tasks need to trigger off specific internal outputs of this task (as opposed to triggering off it finishing). The task implementation must report the specified output messages by calling cylc task message when the corresponding real outputs have been completed.

[runtime] [[__NAME__]] [[[outputs]]] __OUTPUT__ Replace __OUTPUT__ with any number of labelled output messages.
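
For example (hypothetical task and message; for cycling tasks the output message should identify the cycle time):

  [runtime]
      [[model]]
          [[[outputs]]]
              restart = "model restart files done"

The task implementation reports this when the corresponding files are ready:

  cylc task message "model restart files done"

and downstream tasks can trigger off the labelled output in the graph, e.g. model:restart => next_task.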

[runtime] [[__NAME__]] [[[suite state polling]]] Configure automatic suite polling tasks as described in Section 12.20. The items in this section reflect the options and defaults of the cylc suite-state command, except that the target suite name and the --task, --cycle, and --status options are taken from the graph notation.

[runtime] [[__NAME__]] [[[suite state polling]]] run-dir For your own suites the run database location is determined by your site/user config. For other suites, e.g. those owned by others, or mirrored suite databases, use this item to specify the location of the top level cylc run directory (the database should be a suite-name sub-directory of this location).

[runtime] [[__NAME__]] [[[suite state polling]]] interval Polling interval.

[runtime] [[__NAME__]] [[[suite state polling]]] max-polls The maximum number of polls before timing out and entering the ‘failed’ state.

[runtime] [[__NAME__]] [[[suite state polling]]] user Username of an account on the suite host to which you have access. The polling cylc suite-state command will be invoked on the remote account.

[runtime] [[__NAME__]] [[[suite state polling]]] host The hostname of the target suite. The polling cylc suite-state command will be invoked on the remote account.

[runtime] [[__NAME__]] [[[suite state polling]]] verbose Run the polling cylc suite-state command in verbose output mode.

A.5 [visualization]

Configuration of suite graphing and, where applicable, the gcylc graph view. Graphviz documentation of node shapes and so on can be found at http://www.graphviz.org/Documentation.php.

A.5.1 [visualization] initial cycle time

The cycle time from which to start the suite graph.

A.5.2 [visualization] final cycle time

The cycle time at which to end the suite graph.

A.5.3 [visualization] collapsed families

A list of family (namespace) names to be shown in the collapsed state (i.e. the family members will be replaced by a single family node) when the suite is first plotted in the graph viewer or the gcylc graph view. If this item is not set, the default is to collapse all families at first. Interactive GUI controls can then be used to group and ungroup family nodes at will.

A.5.4 [visualization] use node color for edges

Graph edges (dependency arrows) can be plotted in the same color as the upstream node (task or family) to make paths through a complex graph easier to follow.

A.5.5 [visualization] use node color for labels

Graph node labels can be printed in the same color as the node outline.

A.5.6 [visualization] default node attributes

Set the default attributes (color and style etc.) of graph nodes (tasks and families). Attribute pairs must be quoted to hide the internal = character.

A.5.7 [visualization] default edge attributes

Set the default attributes (color and style etc.) of graph edges (dependency arrows). Attribute pairs must be quoted to hide the internal = character.
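
For example, quoting each attribute pair to hide the internal ‘=’ character:

  [visualization]
      default node attributes = "style=filled", "fillcolor=grey", "shape=box"
      default edge attributes = "color=grey"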

A.5.8 [visualization] enable live graph movie

If True, the gcylc graph view writes out a dot-language graph file on every change; these files can be post-processed into a movie showing how the suite evolves. The frames are written to the run time graph directory (see below).

A.5.9 [visualization] [[node groups]]

Define named groups of graph nodes (tasks and families) which can be styled en masse, by name, in [visualization] [[node attributes]]. Node groups are automatically defined for all task families, including root, so you can style family and member nodes at once by family name.

[visualization] [[node groups]] __GROUP__ Replace __GROUP__ with each named group of tasks or families.

A.5.10 [visualization] [[node attributes]]

Here you can assign graph node attributes to specific nodes, or to all members of named groups defined in [visualization] [[node groups]]. Task families are automatically node groups. Styling of a family node applies to all member nodes (tasks and sub-families), but precedence is determined by ordering in the suite definition. For example, if you style a family red and then one of its members green, cylc will plot a red family with one green member; but if you style one member green and then the family red, the red family styling will override the earlier green styling of the member.

[visualization] [[node attributes]] __NAME__ Replace __NAME__ with each node or node group for style attribute assignment.
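
A sketch with hypothetical task and group names:

  [visualization]
      [[node groups]]
          obs = obs_get, obs_process
      [[node attributes]]
          obs = "style=filled", "fillcolor=lightblue"
          # the later assignment overrides the group color for this member:
          obs_process = "fillcolor=orange"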

A.5.11 [visualization] [[runtime graph]]

Cylc can generate graphs of dependencies resolved at run time, i.e. what actually triggers off what as the suite runs. This feature is retained mainly for development and debugging purposes. You can use simulation mode or dummy mode to generate runtime graphs very quickly.

[visualization] [[runtime graph]] enable Runtime graphing is disabled by default.

[visualization] [[runtime graph]] cutoff New nodes will be added to the runtime graph as the corresponding tasks trigger, until their cycle time exceeds the initial cycle time by more than this cutoff, in hours.

[visualization] [[runtime graph]] directory Where to put the runtime graph file, runtime-graph.dot.

A.6 Special Placeholder Variables In Suite Definitions

See Section 9.7.

A.7 Default Suite Configuration

Cylc provides, via $CYLC_DIR/conf/suiterc/*.spec, sensible default values for many configuration items so that most users will not need to explicitly configure log directories and so on. The defaults are sufficient, in fact, to allow test suites defined by dependency graph alone (command scripting, for example, defaults to printing a simple message, sleeping for a few seconds, and then exiting).

The cylc get-config command parses a suite definition and retrieves configuration values for individual items, sections, or entire suites.

B Site And User Config File Reference

 B.1 Top Level Items
 B.2 [task messaging]
 B.3 [suite logging]
 B.4 [documentation]
 B.5 [document viewers]
 B.6 [editors]
 B.7 [Pyro]
 B.8 [hosts]
 B.9 [suite host self-identification]
 B.10 [suite host scanning]

This section defines all legal items and values for cylc site and user config files. See Site And User Config Files (Section 6) for file locations, intended usage, and how to generate the files using the cylc get-global-config command.

As for suite definitions, Jinja2 expressions can be embedded in site and user config files to generate the final result parsed by cylc. Use of Jinja2 in suite definitions is documented in Section 9.6.

B.1 Top Level Items

B.1.1 temporary directory

A temporary directory is needed by a few cylc commands, and is cleaned automatically on exit. Leave unset for the default (usually $TMPDIR).

B.1.2 state dump rolling archive length

A rolling archive of suite state dumps is maintained under the suite run directory, and is used for restarts; this item determines the number of previous states retained. The most recent saved state file is called state. Successively older files have increasing integer values appended, starting from 1.

B.1.3 disable interactive command prompts

Commands that intervene in running suites can be made to ask for confirmation before acting. Some find this annoying and ineffective as a safety measure, however, so command prompts are disabled by default.

B.1.4 enable run directory housekeeping

The suite run directory tree is created anew with every suite start (not restart) but output from the most recent previous runs can be retained in a rolling archive. Set length to 0 to keep no backups. This is incompatible with current Rose suite housekeeping (see Section 14 for more on Rose) so it is disabled by default, in which case new suite run files will overwrite existing ones in the same run directory tree. Rarely, this can result in incorrect polling results due to the presence of old task status files.

B.1.5 run directory rolling archive length

The number of old run directory trees to retain if run directory housekeeping is enabled.

B.1.6 execution polling intervals

Cylc can poll running jobs to catch problems that prevent task messages from being sent back to the suite, such as hard job kills, network outages, or unplanned task host shutdown. Routine polling is done only for the polling task communication method (below) unless suite-specific polling is configured in the suite definition. A list of interval values can be specified, with the last value used repeatedly until the task is finished - this allows more frequent polling near the beginning and end of the anticipated task run time. Multipliers can be used as shorthand as in the example below.

B.1.7 submission polling intervals

Cylc can also poll submitted jobs to catch problems that prevent the submitted job from executing at all, such as deletion from an external batch scheduler queue. Routine polling is done only for the polling task communication method (below) unless suite-specific polling is configured in the suite definition. A list of interval values can be specified as for execution polling (above) but a single value is probably sufficient for job submission polling.

B.2 [task messaging]

This section contains configuration items that affect task-to-suite communications.

B.2.1 [task messaging] retry interval in seconds

If a send fails, the messaging code will retry after a configured delay interval.

B.2.2 [task messaging] maximum number of tries

If successive sends fail, the messaging code will give up after a configured number of tries.

B.2.3 [task messaging] connection timeout in seconds

This is the same as the --pyro-timeout option in cylc commands. Without a timeout Pyro connections to unresponsive suites can hang indefinitely (suites suspended with Ctrl-Z for instance).

B.3 [suite logging]

The suite event log, held under the suite run directory, is maintained as a rolling archive. Logs are rolled over (backed up and started anew) when they reach a configurable limit size.

B.3.1 [suite logging] roll over at start-up

If true, a new suite log will be started for a new suite run.

B.3.2 [suite logging] rolling archive length

How many rolled logs to retain in the archive.

B.3.3 [suite logging] maximum size in bytes

Suite event logs are rolled over when they reach this file size.

B.4 [documentation]

Documentation locations for the cylc doc command and gcylc Help menus.

B.4.1 [documentation] [files]

File locations of documentation held locally on the cylc host server.

[documentation] [files] html index File location of the main cylc documentation index.

[documentation] [files] pdf user guide File location of the cylc User Guide, PDF version.

[documentation] [files] multi-page html user guide File location of the cylc User Guide, multi-page HTML version.

[documentation] [files] single-page html user guide File location of the cylc User Guide, single-page HTML version.

B.4.2 [documentation] [urls]

Online documentation URLs.

[documentation] [urls] internet homepage URL of the cylc internet homepage, with links to documentation for the latest official release.

[documentation] [urls] local index Local intranet URL of the main cylc documentation index.

B.5 [document viewers]

PDF and HTML viewers can be launched by cylc to view the documentation.

B.5.1 [document viewers] pdf

Your preferred PDF viewer program.

B.5.2 [document viewers] html

Your preferred web browser.

B.6 [editors]

Choose your favourite text editor for editing suite definitions.

B.6.1 [editors] terminal

The editor to be invoked by the cylc command line interface.

B.6.2 [editors] gui

The editor to be invoked by the cylc GUI.

B.7 [Pyro]

Pyro is the RPC layer used for network communication between cylc clients (suite-connecting commands and GUIs) and servers (running suites). Each suite listens on a dedicated network port, binding on the first available starting at the configured base port.

B.7.1 [Pyro] base port

The first port that cylc is allowed to use.

B.7.2 [Pyro] maximum number of ports

This determines the maximum number of suites that can run at once on the suite host.

B.7.3 [Pyro] ports directory

Each suite stores its port number, by suite name, under this directory.

B.8 [hosts]

The [hosts] section configures some important host-specific settings for the suite host (‘localhost’) and remote task hosts. Note that remote task behaviour is determined by the site/user config on the suite host, not on the task host. Suites can specify task hosts that are not listed here, in which case local settings will be assumed, with the local home directory path, if present, replaced by $HOME in items that configure directory locations.

B.8.1 [hosts] HOST

The default task host is the suite host, localhost, with default values as listed below. Use an explicit [hosts][[localhost]] section if you need to override the defaults. Localhost settings are then also used as defaults for other hosts, with the local home directory path replaced as described above. This applies to items omitted from an explicit host section, and to hosts that are not listed at all in the site and user config files. Explicit host sections are only needed if the automatically modified local defaults are not sufficient.

Host section headings can also be regular expressions to match multiple hostnames. Note that the general regular expression wildcard is ‘.*’ (zero or more of any character), not ‘*’. Hostname matching regular expressions are used as-is in the Python re.match() function. As such they match from the beginning of the hostname string (as specified in the suite definition) and they do not have to match through to the end of the string (use the string-end matching character ‘$’ in the expression to force this).

A hierarchy of host match expressions from specific to general can be used because config items are processed in the order specified in the file.
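
For example (hypothetical host names and paths), a specific host is listed first, then a pattern covering the other hpc-* hosts:

  [hosts]
      [[hpc-login1]]
          work directory = /scratch/$USER/cylc-work
      [[hpc-.*]]
          run directory = /big/$USER/cylc-run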

[hosts] HOST run directory The top level of the directory tree that holds suite-specific output logs, state dump files, run database, etc.

[hosts] HOST work directory The top level for suite work and share directories.

[hosts] HOST task communication method The means by which task progress messages are reported back to the running suite. See above for default polling intervals for the poll method.

[hosts] HOST remote shell template A string template, containing %s as a placeholder for the host name, for the command used to invoke commands on this host. This is not used on the suite host unless you run local tasks under another user account.

[hosts] HOST use login shell Whether to use a login shell or not for remote command invocation. By default cylc runs remote ssh commands using a login shell,

 
  ssh user@host 'bash --login cylc ...'

which will source /etc/profile and ~/.profile to set up the user environment. However, for security reasons some institutions do not allow unattended commands to start login shells, so you can turn off this behaviour to get,

 
  ssh user@host 'cylc ...'

which will use the default shell on the remote machine, sourcing ~/.bashrc (or ~/.cshrc) to set up the environment. In either case $PATH on the remote machine should include $CYLC_DIR/bin in order for the remote cylc program to be found.

NOTE: this setting does not currently apply to job submission commands (which execute on the suite host to submit remote tasks).

B.9 [suite host self-identification]

The suite host’s identity must be determined locally by cylc and passed to running tasks (via $CYLC_SUITE_HOST) so that task messages can target the right suite on the right host.

B.9.1 [suite host self-identification] method

This item determines how cylc finds the identity of the suite host. For the default name method cylc asks the suite host for its host name. This should resolve on remote task hosts to the IP address of the suite host; if it doesn’t, adjust network settings or use one of the other methods. For the address method, cylc attempts to use a special external “target address” to determine the IP address of the suite host as seen by remote task hosts (in-source documentation in $CYLC_DIR/lib/cylc/suite_host.py explains how this works). And finally, as a last resort, you can choose the hardwired method and manually specify the host name or IP address of the suite host.

B.9.2 [suite host self-identification] target

This item is required for the address self-identification method. If your suite host sees the internet, a common address such as google.com will do; otherwise choose a host visible on your intranet.

B.9.3 [suite host self-identification] host

Use this item to explicitly set the name or IP address of the suite host if you have to use the hardwired self-identification method.

B.10 [suite host scanning]

Utilities such as cylc gsummary need to scan hosts for running suites.

B.10.1 [suite host scanning] hosts

A list of hosts to scan for running suites.

C Command Reference

 C.1 Command Categories
 C.2 Commands
 
 
Cylc ("silk") is a suite engine and metascheduler that specializes in 
cycling weather and climate forecasting suites and related processing 
(but it can also be used for one-off workflows of non-cycling tasks). 
For detailed documentation see the Cylc User Guide (cylc doc --help). 
 
Version 5.4.5 
 
The graphical user interface for cylc is "gcylc" (a.k.a. "cylc gui"). 
 
USAGE: 
  % cylc -v,--version                   # print cylc version 
  % cylc help,--help,-h,?               # print this help page 
 
  % cylc help CATEGORY                  # print help by category 
  % cylc CATEGORY help                  # (ditto) 
 
  % cylc help [CATEGORY] COMMAND        # print command help 
  % cylc [CATEGORY] COMMAND help,--help # (ditto) 
 
  % cylc [CATEGORY] COMMAND [options] SUITE [arguments] 
  % cylc [CATEGORY] COMMAND [options] SUITE TASK [arguments] 
 
Commands and categories can both be abbreviated. Use of categories is 
optional, but they organize help and disambiguate abbreviated commands: 
  % cylc control trigger SUITE TASK     # trigger TASK in SUITE 
  % cylc trigger SUITE TASK             # ditto 
  % cylc con trig SUITE TASK            # ditto 
  % cylc c t SUITE TASK                 # ditto 
 
CYLC SUITE NAMES AND YOUR REGISTRATION DATABASE 
  Suites are addressed by hierarchical names such as suite1, nwp.oper, 
nwp.test.LAM2, etc. in a "name registration database" ($HOME/.cylc/REGDB) 
that simply associates names with the suite definition locations.  The 
'--db=' command option can be used to view and copy suites from other 
users, with access governed by normal filesystem permissions. 
 
TASK IDENTIFICATION IN CYLC SUITES 
  Tasks are identified by NAME.TAG where for cycling tasks TAG is a 
cycle time (YYYY[MM[DD[HH[mm[ss]]]]]) and for asynchronous tasks TAG is 
an integer (just '1' for one-off asynchronous tasks). 
 
HOW TO DRILL DOWN TO COMMAND USAGE HELP: 
  % cylc help           # list all available categories (this page) 
  % cylc help prep      # list commands in category 'preparation' 
  % cylc help prep edit # command usage help for 'cylc [prep] edit' 
 
Command CATEGORIES: 
  all ........... The complete command set. 
  db|database ... Suite name registration, copying, deletion, etc. 
  preparation ... Suite editing, validation, visualization, etc. 
  information ... Interrogate suite definitions and running suites. 
  discovery ..... Detect running suites. 
  control ....... Suite start up, monitoring, and control. 
  utility ....... Cycle arithmetic and templating, housekeeping, etc. 
  task .......... The task messaging interface. 
  hook .......... Suite and task event hook scripts. 
  admin ......... Cylc installation, testing, and example suites. 
  license|GPL ... Software licensing information (GPL v3.0).
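The NAME.TAG task identification convention described above can be illustrated with a short Python sketch. This is a hypothetical helper for illustration only, not cylc's internal parser:

```python
# Hypothetical illustration of the NAME.TAG task identifier convention
# described above; this is NOT cylc's internal parser.
def parse_task_id(task_id):
    """Split a cylc task ID into (name, tag, is_cycling).

    TAG is a cycle time (YYYY[MM[DD[HH[mm[ss]]]]]) for cycling tasks,
    or an integer (just '1' for one-off asynchronous tasks).
    """
    name, sep, tag = task_id.rpartition(".")
    if not sep or not name:
        raise ValueError("expected NAME.TAG, got: " + task_id)
    # A cycle time is all digits, with an even length of at least 4 (YYYY).
    is_cycling = tag.isdigit() and len(tag) >= 4 and len(tag) % 2 == 0
    return name, tag, is_cycling

print(parse_task_id("model.2010082318"))  # a cycling task
print(parse_task_id("prune.1"))           # a one-off asynchronous task
```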

C.1 Command Categories

C.1.1 admin
 
CATEGORY: admin - Cylc installation, testing, and example suites. 
 
HELP: cylc [admin] COMMAND help,--help 
  You can abbreviate admin and COMMAND. 
  The category admin may be omitted. 
 
COMMANDS: 
  check-software .... Check required software is installed. 
  import-examples ... Import example suites into your suite name database 
  test-battery ...... Run a battery of self-diagnosing test suites 
  test-db ........... Run an automated suite name database test 
  upgrade-db ........ Upgrade a pre-cylc-5.4 suite name database

C.1.2 all
 
CATEGORY: all - The complete command set. 
 
HELP: cylc [all] COMMAND help,--help 
  You can abbreviate all and COMMAND. 
  The category all may be omitted. 
 
COMMANDS: 
  broadcast|bcast ............ Change suite [runtime] settings on the fly 
  cat-log|log ................ Print various suite and task log files 
  cat-state .................. Print the state of tasks from the state dump 
  check-software ............. Check required software is installed. 
  check-triggering ........... A suite shutdown event hook for cylc testing 
  checkvars .................. Check required environment variables en masse 
  conditions ................. Print the GNU General Public License v3.0 
  copy|cp .................... Copy a suite or a group of suites 
  cycletime .................. Cycle time arithmetic and filename templating 
  depend ..................... Add prerequisites to tasks in a running suite 
  diff|compare ............... Compare two suite definitions and print differences 
  documentation|browse ....... Display cylc documentation (User Guide etc.) 
  dump ....................... Print the state of tasks in a running suite 
  edit ....................... Edit suite definitions, optionally inlined 
  email-suite ................ A suite event hook script that sends email alerts 
  email-task ................. A task event hook script that sends email alerts 
  failed|task-failed ......... Release task lock and report failure 
  get-config ................. Parse a suite and report configuration values 
  get-directory .............. Retrieve suite definition directory paths 
  get-global-config .......... Print or generate site and user config 
  get-job-status ............. Retrieve job status for a task 
  gpanel ..................... Internal interface for GNOME 2 panel applet 
  graph ...................... Plot suite dependency graphs and runtime hierarchies 
  gsummary ................... Summary GUI for monitoring multiple suites 
  gui ........................ (a.k.a. gcylc) cylc GUI for suite control etc. 
  hold ....................... Hold (pause) suites or individual tasks 
  housekeeping ............... Parallel archiving and cleanup on cycle time offsets 
  import-examples ............ Import example suites into your suite name database 
  insert ..................... Insert tasks into a running suite 
  jobscript .................. Generate a task job script and print it to stdout 
  kill ....................... Kill submitted or running tasks 
  list|ls .................... List suite tasks and family namespaces 
  lockclient|lc .............. Manual suite and task lock management 
  lockserver ................. The cylc lockserver daemon 
  message|task-message ....... Report progress and completion of outputs 
  monitor .................... An in-terminal suite monitor (see also gcylc) 
  nudge ...................... Cause the cylc task processing loop to be invoked 
  ping ....................... Check that a suite is running 
  poll ....................... Poll submitted or running tasks 
  print ...................... Print registered suites 
  purge ...................... Remove task trees from a running suite 
  random|rnd ................. Generate a random integer within a given range 
  refresh .................... Report invalid registrations and update suite titles 
  register ................... Register a suite for use 
  release|unhold ............. Release (unpause) suites or individual tasks 
  reload ..................... Reload the suite definition at run time 
  remove ..................... Remove tasks from a running suite 
  reregister|rename .......... Change the name of a suite 
  reset ...................... Force one or more tasks to change state. 
  restart .................... Restart a suite from a previous state 
  run|start .................. Start a suite at a given cycle time 
  scan ....................... Scan a host for running suites and lockservers 
  scp-transfer ............... Scp-based file transfer for cylc suites 
  search|grep ................ Search in suite definitions 
  set-runahead ............... Change the runahead limit in a running suite. 
  set-verbosity .............. Change a running suite's logging verbosity 
  show ....................... Print task state (prerequisites and outputs etc.) 
  started|task-started ....... Acquire a task lock and report started 
  stop|shutdown .............. Shut down running suites 
  submit|single .............. Run a single task just as its parent suite would 
  succeeded|task-succeeded ... Release task lock and report succeeded 
  suite-state ................ Query the task states in a suite 
  test-battery ............... Run a battery of self-diagnosing test suites 
  test-db .................... Run an automated suite name database test 
  trigger .................... Manually trigger or re-trigger a task 
  unregister ................. Unregister and optionally delete suites 
  upgrade-db ................. Upgrade a pre-cylc-5.4 suite name database 
  validate ................... Parse and validate suite definitions 
  view ....................... View suite definitions, inlined and Jinja2 processed 
  warranty ................... Print the GPLv3 disclaimer of warranty

C.1.3 control
 
CATEGORY: control - Suite start up, monitoring, and control. 
 
HELP: cylc [control] COMMAND help,--help 
  You can abbreviate control and COMMAND. 
  The category control may be omitted. 
 
COMMANDS: 
  broadcast|bcast ... Change suite [runtime] settings on the fly 
  depend ............ Add prerequisites to tasks in a running suite 
  get-job-status .... Retrieve job status for a task 
  gui ............... (a.k.a. gcylc) cylc GUI for suite control etc. 
  hold .............. Hold (pause) suites or individual tasks 
  insert ............ Insert tasks into a running suite 
  kill .............. Kill submitted or running tasks 
  nudge ............. Cause the cylc task processing loop to be invoked 
  poll .............. Poll submitted or running tasks 
  purge ............. Remove task trees from a running suite 
  release|unhold .... Release (unpause) suites or individual tasks 
  reload ............ Reload the suite definition at run time 
  remove ............ Remove tasks from a running suite 
  reset ............. Force one or more tasks to change state. 
  restart ........... Restart a suite from a previous state 
  run|start ......... Start a suite at a given cycle time 
  set-runahead ...... Change the runahead limit in a running suite. 
  set-verbosity ..... Change a running suite's logging verbosity 
  stop|shutdown ..... Shut down running suites 
  trigger ........... Manually trigger or re-trigger a task

C.1.4 database
 
CATEGORY: db|database - Suite name registration, copying, deletion, etc. 
Suite registrations are held in a simple database $HOME/.cylc/REGDB. 
 
HELP: cylc [db|database] COMMAND help,--help 
  You can abbreviate db|database and COMMAND. 
  The category db|database may be omitted. 
 
COMMANDS: 
  copy|cp ............. Copy a suite or a group of suites 
  get-directory ....... Retrieve suite definition directory paths 
  print ............... Print registered suites 
  refresh ............. Report invalid registrations and update suite titles 
  register ............ Register a suite for use 
  reregister|rename ... Change the name of a suite 
  unregister .......... Unregister and optionally delete suites

C.1.5 discovery
 
CATEGORY: discovery - Detect running suites. 
 
HELP: cylc [discovery] COMMAND help,--help 
  You can abbreviate discovery and COMMAND. 
  The category discovery may be omitted. 
 
COMMANDS: 
  ping ... Check that a suite is running 
  scan ... Scan a host for running suites and lockservers

C.1.6 hook
 
CATEGORY: hook - Suite and task event hook scripts. 
 
HELP: cylc [hook] COMMAND help,--help 
  You can abbreviate hook and COMMAND. 
  The category hook may be omitted. 
 
COMMANDS: 
  check-triggering ... A suite shutdown event hook for cylc testing 
  email-suite ........ A suite event hook script that sends email alerts 
  email-task ......... A task event hook script that sends email alerts

C.1.7 information
 
CATEGORY: information - Interrogate suite definitions and running suites. 
 
HELP: cylc [information] COMMAND help,--help 
  You can abbreviate information and COMMAND. 
  The category information may be omitted. 
 
COMMANDS: 
  cat-log|log ............ Print various suite and task log files 
  cat-state .............. Print the state of tasks from the state dump 
  documentation|browse ... Display cylc documentation (User Guide etc.) 
  dump ................... Print the state of tasks in a running suite 
  get-config ............. Parse a suite and report configuration values 
  get-global-config ...... Print or generate site and user config 
  gpanel ................. Internal interface for GNOME 2 panel applet 
  gsummary ............... Summary GUI for monitoring multiple suites 
  gui|gcylc .............. (a.k.a. gcylc) cylc GUI for suite control etc. 
  list|ls ................ List suite tasks and family namespaces 
  monitor ................ An in-terminal suite monitor (see also gcylc) 
  show ................... Print task state (prerequisites and outputs etc.)

C.1.8 license
 
CATEGORY: license|GPL - Software licensing information (GPL v3.0). 
 
HELP: cylc [license|GPL] COMMAND help,--help 
  You can abbreviate license|GPL and COMMAND. 
  The category license|GPL may be omitted. 
 
COMMANDS: 
  conditions ... Print the GNU General Public License v3.0 
  warranty ..... Print the GPLv3 disclaimer of warranty

C.1.9 preparation
 
CATEGORY: preparation - Suite editing, validation, visualization, etc. 
 
HELP: cylc [preparation] COMMAND help,--help 
  You can abbreviate preparation and COMMAND. 
  The category preparation may be omitted. 
 
COMMANDS: 
  diff|compare ... Compare two suite definitions and print differences 
  edit ........... Edit suite definitions, optionally inlined 
  graph .......... Plot suite dependency graphs and runtime hierarchies 
  jobscript ...... Generate a task job script and print it to stdout 
  list|ls ........ List suite tasks and family namespaces 
  search|grep .... Search in suite definitions 
  validate ....... Parse and validate suite definitions 
  view ........... View suite definitions, inlined and Jinja2 processed

C.1.10 task
 
CATEGORY: task - The task messaging interface. 
 
HELP: cylc [task] COMMAND help,--help 
  You can abbreviate task and COMMAND. 
  The category task may be omitted. 
 
COMMANDS: 
  failed|task-failed ......... Release task lock and report failure 
  message|task-message ....... Report progress and completion of outputs 
  started|task-started ....... Acquire a task lock and report started 
  submit|single .............. Run a single task just as its parent suite would 
  succeeded|task-succeeded ... Release task lock and report succeeded

C.1.11 utility
 
CATEGORY: utility - Cycle arithmetic and templating, housekeeping, etc. 
 
HELP: cylc [utility] COMMAND help,--help 
  You can abbreviate utility and COMMAND. 
  The category utility may be omitted. 
 
COMMANDS: 
  checkvars ....... Check required environment variables en masse 
  cycletime ....... Cycle time arithmetic and filename templating 
  housekeeping .... Parallel archiving and cleanup on cycle time offsets 
  lockclient|lc ... Manual suite and task lock management 
  lockserver ...... The cylc lockserver daemon 
  random|rnd ...... Generate a random integer within a given range 
  scp-transfer .... Scp-based file transfer for cylc suites 
  suite-state ..... Query the task states in a suite

C.2 Commands

C.2.1 broadcast
 
Usage: cylc [control] broadcast|bcast [OPTIONS] REG 
 
Override [runtime] config in targeted namespaces in a running suite. 
 
Uses for broadcast include making temporary changes to task behaviour, 
and task-to-downstream-task communication via environment variables. 
 
A broadcast can target any [runtime] namespace for all cycles or for a 
specific cycle.  If a task is affected by specific-cycle and all-cycle 
broadcasts at once, the specific takes precedence. If a task is affected 
by broadcasts to multiple ancestor namespaces, the result is determined 
by normal [runtime] inheritance. 
 
Broadcasts persist, even across suite restarts, until they expire when 
their target cycle time is older than the oldest current in the suite, 
or until they are explicitly cancelled with this command.  All-cycle 
broadcasts do not expire. 
 
For each task the final effect of all broadcasts to all namespaces is 
computed on the fly just prior to job submission.  The --cancel and 
--clear options simply cancel (remove) active broadcasts, they do not 
act directly on the final task-level result. Consequently, for example, 
you cannot broadcast to "all cycles except Tn" with an all-cycle 
broadcast followed by a cancel to Tn (there is no direct broadcast to Tn 
to cancel); and you cannot broadcast to "all members of FAMILY except 
member_n" with a general broadcast to FAMILY followed by a cancel to 
member_n (there is no direct broadcast to member_n to cancel). 
 
To broadcast a variable to all tasks (quote items with internal spaces): 
  % cylc broadcast -s "[environment]VERSE = the quick brown fox" REG 
To cancel the same broadcast: 
  % cylc broadcast --cancel "[environment]VERSE" REG 
 
Use -d/--display to see active broadcasts. Multiple set or cancel 
options can be used on the same command line. Broadcast cannot change 
[runtime] inheritance. 
 
See also 'cylc reload' - reload a modified suite definition at run time. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  -t TAG, --tag=TAG     Target cycle time or tag. Defaults to 'all-cycles' 
                        with --set and --cancel, and nothing with --clear. 
  -n NAME, --namespace=NAME 
                        Target namespace. Defaults to 'root' with --set and 
                        --cancel, and nothing with --clear. 
  -s [SEC]ITEM=VALUE, --set=[SEC]ITEM=VALUE 
                        A [runtime] config item and value to broadcast. 
  -c [SEC]ITEM, --cancel=[SEC]ITEM 
                        An item-specific broadcast to cancel. 
  -C, --clear           Cancel all broadcasts, or with -t/--tag, 
                        -n/--namespace, cancel all broadcasts to targeted 
                        namespaces and/or cycle times. Use '-C -t all-cycles' 
                        to cancel all all-cycle broadcasts without canceling 
                        all specific-cycle broadcasts. 
  -e CYCLE, --expire=CYCLE 
                        Cancel any broadcasts that target cycle times earlier 
                        than, but not inclusive of, CYCLE. 
  -d, --display         Display active broadcasts. 
  -k TASKID, --display-task=TASKID 
                        Print active broadcasts for a given task (NAME.TAG). 
  -b, --box             Use unicode box characters with -d, -k. 
  -r, --raw             With -d/--display or -k/--display-task, write out the 
                        broadcast config structure in raw Python form. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -f, --force           Do not ask for confirmation before acting. Note that 
                        it is not necessary to use this option if interactive 
                        command prompts have been disabled in the site/user 
                        config files.
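The precedence rules above (a specific-cycle broadcast beats an all-cycle one; broadcasts to multiple ancestor namespaces resolve by normal inheritance) can be sketched in Python. This is a simplified illustrative model, not cylc's implementation:

```python
# Simplified sketch of broadcast precedence, as described above: apply
# ancestor namespaces before descendants (normal inheritance) and
# all-cycle broadcasts before specific-cycle ones, so later updates
# override earlier ones. NOT cylc's actual code.
def resolve_broadcasts(broadcasts, lineage, cycle):
    """broadcasts: {(namespace, target_cycle): {item: value}}
    lineage: the task's namespaces from root down to itself."""
    result = {}
    for ns in lineage:
        for target in ("all-cycles", cycle):
            result.update(broadcasts.get((ns, target), {}))
    return result

bcasts = {
    ("root", "all-cycles"): {"VERSE": "the quick brown fox"},
    ("FAM", "all-cycles"): {"VERSE": "jumped over"},
    ("FAM", "2010082318"): {"VERSE": "the lazy dog"},
}
# A task in family FAM at cycle 2010082318 gets the specific-cycle value:
print(resolve_broadcasts(bcasts, ["root", "FAM"], "2010082318"))
```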

C.2.2 cat-log
 
Usage: cylc [info] cat-log|log [OPTIONS] REG [TASK-ID] 
Print various log files for suites and tasks that are currently running, 
or have previously finished. 
 
Arguments: 
   REG                     Suite name 
   [TASK-ID]               Print the stdout or stderr log of the identified task 
 
Options: 
  -h, --help            show this help message and exit 
  -l, --location        Just print the location of the requested log file. 
  -r INT, --rotation=INT 
                        Rotation number (to view older, rotated suite logs) 
  -o, --stdout          Print suite or task stdout logs (for suites, the 
                        default is to print the event log;  for tasks, the 
                        default is to print the job script). 
  -e, --stderr          Print suite or task stderr logs (see --stdout for 
                        defaults). 
  -t INT, --try-number=INT 
                        Task try number (default 1). 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB.

C.2.3 cat-state
 
Usage: cylc [info] cat-state [OPTIONS] REG 
 
Print the suite state dump file directly to stdout. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help     show this help message and exit 
  -d, --dump     Use the same display format as the 'cylc dump' command. 
  --user=USER    Other user account name. This results in command reinvocation 
                 on the remote account. 
  --host=HOST    Other host name. This results in command reinvocation on the 
                 remote account. 
  -v, --verbose  Verbose output mode. 
  --debug        Run suites in non-daemon mode, and show exception tracebacks. 
  --db=PATH      Alternative suite registration database location, defaults to 
                 $HOME/.cylc/REGDB.

C.2.4 check-software
 
USAGE: cylc [admin] check-software 
 
Check that external software required by cylc is installed. 
 
Options: 
  -h, --help   Print this help message and exit.

C.2.5 check-triggering
 
USAGE: cylc [hook] check-triggering ARGS 
 
This is a cylc shutdown event handler that compares the newly generated 
suite log with a previously generated reference log "reference.log" 
stored in the suite definition directory. Currently it just compares 
runtime triggering information, disregarding event order and timing, and 
fails the suite if there is any difference. This should be sufficient to 
verify correct scheduling of any suite that is not affected by different 
run-to-run conditional triggering. 
 
1) run your suite with "cylc run --generate-reference-log" to generate 
the reference log with resolved triggering information. Check manually 
that the reference run was correct. 
2) run reference tests with "cylc run --reference-test" - this 
automatically sets the shutdown event handler along with a suite timeout 
and "abort if shutdown handler fails", "abort on timeout", and "abort if 
any task fails". 
 
Reference tests can use any run mode: 
  simulation mode - tests that scheduling is equivalent to the reference 
  dummy mode - also tests that task hosting, job submission, job script 
   evaluation, and cylc messaging are not broken. 
  live mode - tests everything (but takes longer with real tasks!) 
 
 If any task fails, or if cylc itself fails, or if triggering is not 
 equivalent to the reference run, the test will abort with non-zero exit 
 status - so reference tests can be used as automated tests to check 
 that changes to cylc have not broken your suites.
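The comparison described above - triggering information only, disregarding event order and timing - amounts to comparing sets of triggering records. A Python sketch, using a hypothetical log-line format rather than the real cylc log format:

```python
# Sketch of an order- and timing-insensitive triggering comparison, as
# described above. The log-line format here is hypothetical; the real
# hook parses actual cylc suite logs.
def triggering_equal(new_log_lines, ref_log_lines):
    def triggers(lines):
        # Keep only triggering records, dropping the leading timestamp.
        return {line.split(" ", 1)[1]
                for line in lines if "triggered off" in line}
    return triggers(new_log_lines) == triggers(ref_log_lines)

ref = [
    "2013-12-17T09:00:01 foo.2013121706 triggered off []",
    "2013-12-17T09:05:00 bar.2013121706 triggered off ['foo.2013121706']",
]
new = [
    "2013-12-17T10:12:34 bar.2013121706 triggered off ['foo.2013121706']",
    "2013-12-17T10:15:00 foo.2013121706 triggered off []",
]
# Order and timing differ, but the triggering is equivalent:
print(triggering_equal(new, ref))
```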

C.2.6 checkvars
 
Usage: cylc [util] checkvars [OPTIONS] VARNAMES 
 
Check that each member of a list of environment variables is defined, 
and then optionally check their values according to the chosen 
commandline option. Note that THE VARIABLES MUST BE EXPORTED AS THIS 
SCRIPT NECESSARILY EXECUTES IN A SUBSHELL. 
 
All of the input variables are checked in turn and the results printed. 
If any problems are found then, depending on use of '-w,--warn-only', 
this script either aborts with exit status 1 (error) or emits a stern 
warning and exits with status 0 (success). 
 
Arguments: 
   VARNAMES     Space-separated list of environment variable names. 
 
Options: 
  -h, --help            show this help message and exit 
  -d, --dirs-exist      Check that the variables refer to directories that 
                        exist. 
  -c, --create-dirs     Attempt to create the directories referred to by the 
                        variables, if they do not already exist. 
  -p, --create-parent-dirs 
                        Attempt to create the parent directories of files 
                        referred to by the variables, if they do not already 
                        exist. 
  -f, --files-exist     Check that the variables refer to files that exist. 
  -i, --int             Check that the variables refer to integer values. 
  -s, --silent          Do not print the result of each check. 
  -w, --warn-only       Print a warning instead of aborting with error status.
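A minimal Python sketch of this style of check (using os.environ, which only sees exported variables, as the warning above notes). This is hypothetical illustrative code, not the checkvars script itself:

```python
import os

# Sketch of a checkvars-style check: each named variable must be defined
# in the (exported) environment, optionally with an integer value.
# NOT the real checkvars script.
def check_vars(varnames, require_int=False, warn_only=False):
    ok = True
    for name in varnames:
        value = os.environ.get(name)
        if value is None:
            print("ERROR: $%s is not defined" % name)
            ok = False
        elif require_int and not value.lstrip("-").isdigit():
            print("ERROR: $%s is not an integer: %s" % (name, value))
            ok = False
        else:
            print("ok: $%s=%s" % (name, value))
    if not ok and not warn_only:
        raise SystemExit(1)  # abort with error status, like checkvars
    return ok

os.environ["NMEMBERS"] = "12"
check_vars(["NMEMBERS"], require_int=True)
```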

C.2.7 conditions
 
USAGE: cylc [license] conditions [--help] 
Cylc is released under the GNU General Public License v3.0. 
This command prints the GPL v3.0 license in full. 
 
Options: 
  --help   Print this usage message.

C.2.8 copy
 
Usage: cylc [db] copy|cp [OPTIONS] REG REG2 TOPDIR 
 
Copy suite or group REG to TOPDIR, and register the copy as REG2. 
 
Consider the following three suites: 
 
% cylc db print '^foo'     # printed in flat form 
foo.bag     | "Test Suite Zero" | /home/bob/zero 
foo.bar.qux | "Test Suite Two"  | /home/bob/two 
foo.bar.baz | "Test Suite One"  | /home/bob/one 
 
% cylc db print -t '^foo'  # printed in tree form 
foo 
 |-bag    "Test Suite Zero" | /home/bob/zero 
 `-bar 
   |-baz  "Test Suite One"  | /home/bob/one 
   `-qux  "Test Suite Two"  | /home/bob/two 
 
These suites are stored in a flat directory structure under /home/bob, 
but they are organised in the suite database as a group 'foo' that 
contains the suite 'foo.bag' and a group 'foo.bar', which in turn 
contains the suites 'foo.bar.baz' and 'foo.bar.qux'. 
 
When you copy suites with this command, the target registration names 
are determined by REG2 and the name structure underneath REG, and the 
suite definition directories are copied into a directory tree under 
TOPDIR whose structure reflects the target registration names. If this 
is not what you want, you can copy suite definition directories manually 
and then register the copies with 'cylc register'. 
 
EXAMPLES (using the three suites above): 
 
% cylc db copy foo.bar.baz red /home/bob       # suite to suite 
  Copying suite definition for red 
% cylc db print "^red" 
  red | "Test Suite One" | /home/bob/red 
 
% cylc copy foo.bar.baz blue.green /home/bob   # suite to group 
  Copying suite definition for blue.green 
% cylc db pr "^blue" 
  blue.green | "Test Suite One" | /home/bob/blue/green 
 
% cylc copy foo.bar orange /home/bob           # group to group 
  Copying suite definition for orange.qux 
  Copying suite definition for orange.baz 
% cylc db pr "^orange" 
  orange.qux | "Test Suite Two" | /home/bob/orange/qux 
  orange.baz | "Test Suite One" | /home/bob/orange/baz 
 
Arguments: 
   REG                  Source suite name 
   REG2                 Target suite name 
   TOPDIR               Top level target directory. 
 
Options: 
  -h, --help      show this help message and exit 
  --db-from=PATH  Copy suites from another DB (defaults to --db). 
  --user=USER     Other user account name. This results in command 
                  reinvocation on the remote account. 
  --host=HOST     Other host name. This results in command reinvocation on the 
                  remote account. 
  -v, --verbose   Verbose output mode. 
  --debug         Run suites in non-daemon mode, and show exception 
                  tracebacks. 
  --db=PATH       Alternative suite registration database location, defaults 
                  to $HOME/.cylc/REGDB.
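The target-name mapping described above - the part of each source name below the source group is re-rooted under the target name - can be sketched as follows (illustrative only, not cylc's implementation):

```python
# Sketch of how target registration names are derived when copying a
# suite or group: strip the source prefix, re-root under the target.
# NOT cylc's implementation.
def copy_names(registered, source, target):
    mapping = {}
    for name in registered:
        if name == source:
            mapping[name] = target                 # suite-to-suite copy
        elif name.startswith(source + "."):
            suffix = name[len(source) + 1:]
            mapping[name] = target + "." + suffix  # group member copy
    return mapping

db = ["foo.bag", "foo.bar.baz", "foo.bar.qux"]
# Copying group foo.bar to orange (as in the group-to-group example):
print(copy_names(db, "foo.bar", "orange"))
```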

C.2.9 cycletime
 
Usage: cylc [util] cycletime [OPTIONS] [CYCLE] 
 
Arithmetic cycle time offset computation, and filename templating. 
 
Examples: 
 
1) print offset from an explicit cycle time: 
  % cylc [util] cycletime --offset-hours=6 2010082318 
  2010082400 
 
2) print offset from $CYLC_TASK_CYCLE_TIME (as in suite tasks): 
  % export CYLC_TASK_CYCLE_TIME=2010082318 
  % cylc cycletime --offset-hours=-6 
  2010082312 
 
3) cycle time filename templating, explicit template: 
  % export CYLC_TASK_CYCLE_TIME=201008 
  % cylc cycletime --offset-years=2 --template=foo-YYYYMM.nc 
  foo-201208.nc 
 
4) cycle time filename templating, template in a variable: 
  % export CYLC_TASK_CYCLE_TIME=201008 
  % export MYTEMPLATE=foo-YYYYMM.nc 
  % cylc cycletime --offset-years=2 --template=MYTEMPLATE 
  foo-201208.nc 
 
Arguments: 
   [CYCLE]    YYYY[MM[DD[HH[mm[ss]]]]], default $CYLC_TASK_CYCLE_TIME 
 
Options: 
  -h, --help            show this help message and exit 
  --offset-hours=HOURS  Add N hours to CYCLE (may be negative) 
  --offset-days=DAYS    Add N days to CYCLE (N may be negative) 
  --offset-months=MONTHS 
                        Add N months to CYCLE (N may be negative) 
  --offset-years=YEARS  Add N years to CYCLE (N may be negative) 
  --template=TEMPLATE   Filename template string or variable 
  --print-year          Print only YYYY of result 
  --print-month         Print only MM of result 
  --print-day           Print only DD of result 
  --print-hour          Print only HH of result
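The hour offsets above are plain datetime arithmetic on the cycle time string, as this Python sketch shows (month and year offsets need calendar-aware handling; this is illustrative, not cylc's implementation):

```python
from datetime import datetime, timedelta

# Sketch of 'cylc cycletime --offset-hours' on a YYYYMMDDHH cycle time,
# plus simple YYYYMM filename templating. NOT cylc's implementation.
def offset_hours(cycle, hours):
    dt = datetime.strptime(cycle, "%Y%m%d%H") + timedelta(hours=hours)
    return dt.strftime("%Y%m%d%H")

def fill_template(template, cycle):
    # Substitute fields of the cycle time into the template string.
    return (template.replace("YYYY", cycle[0:4])
                    .replace("MM", cycle[4:6])
                    .replace("DD", cycle[6:8])
                    .replace("HH", cycle[8:10]))

print(offset_hours("2010082318", 6))             # as in example 1 above
print(fill_template("foo-YYYYMM.nc", "201008"))  # as in example 3 above
```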

C.2.10 depend
 
Usage: cylc [control] depend [OPTIONS] REG TASK DEP 
 
Add new dependencies on the fly to tasks in running suite REG. If DEP 
is a task ID the target TASK will depend on that task finishing, 
otherwise DEP can be an explicit quoted message such as 
  "Data files uploaded for 2011080806" 
(presumably there will be another task in the suite, or you will insert 
one, that reports that message as an output). 
 
Prerequisites added on the fly are not propagated to the successors 
of TASK, and they will not persist in TASK across a suite restart. 
 
Arguments: 
   REG                Suite name 
   TASK               Target task 
   DEP                New dependency 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files.

C.2.11 diff
 
Usage: cylc [prep] diff|compare [OPTIONS] SUITE1 SUITE2 
 
Compare two suite definitions and display any differences. 
 
Differencing is done after parsing the suite.rc files so it takes 
account of default values that are not explicitly defined, it disregards 
the order of configuration items, and it sees any include-file content 
after inlining has occurred. 
 
Note that seemingly identical suites normally differ due to inherited 
default configuration values (e.g. the default job submission log 
directory). 
 
Files in the suite bin directory and other sub-directories of the 
suite definition directory are not currently differenced. 
 
Arguments: 
   SUITE1               Suite name or path 
   SUITE2               Suite name or path 
 
Options: 
  -h, --help            show this help message and exit 
  -n, --nested          print suite.rc section headings in nested form. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.
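The --set-file option reads NAME=VALUE pairs, one per line. As a rough sketch of how such a file is interpreted (the function name is illustrative, not a cylc internal, and skipping blank lines is an assumption):

```python
# Illustrative sketch only (not cylc internals): read a --set-file of
# NAME=VALUE pairs, one per line, into a template-variable mapping.
def parse_set_file(text):
    variables = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # assume blank lines are ignored
        name, _, value = line.partition('=')
        variables[name.strip()] = value.strip()
    return variables

print(parse_set_file("N_MEMBERS=10\nRUN_MODE=test"))
# {'N_MEMBERS': '10', 'RUN_MODE': 'test'}
```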

C.2.12 documentation
 
Usage: cylc [info] documentation|browse [OPTIONS] 
 
By default this command opens the cylc documentation index in your 
browser in file:// mode. Alternatively it can open the PDF Cylc User 
Guide directly, or browse the cylc internet homepage, or - if your site 
has a web server with access to the cylc documentation - an intranet 
documentation URL. The browser and PDF reader to use, and the intranet 
URL, is determined by cylc site/user configuration - for details see 
  $ cylc get-global-config --help 
 
Options: 
  -h, --help      show this help message and exit 
  -p, --pdf       Open the PDF User Guide directly 
  -w, --internet  Browse the cylc internet homepage

C.2.13 dump
 
Usage: cylc [info] dump [OPTIONS] REG 
 
Print state information (e.g. the state of each task) from a running 
suite. For small suites 'watch cylc [info] dump SUITE' is an effective 
non-GUI real time monitor (but see also 'cylc monitor'). 
 
For more information about a specific task, such as the current state of 
its prerequisites and outputs, see 'cylc [info] show'. 
 
Examples: 
 Display the state of all running tasks, sorted by cycle time: 
 % cylc [info] dump --tasks --sort SUITE | grep running 
 
 Display the state of all tasks in a particular cycle: 
 % cylc [info] dump -t SUITE | grep 2010082406 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help          show this help message and exit 
  -g, --global        Global information only. 
  -t, --tasks         Task states only. 
  -s, --sort          Task states only; sort by cycle time instead of name. 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation.

C.2.14 edit
 
Usage: cylc [prep] edit [OPTIONS] SUITE 
 
Edit suite definitions without having to move to their directory 
locations, and with optional reversible inlining of include-files. Note 
that Jinja2 suites can only be edited in raw form but the processed 
version can be viewed with 'cylc [prep] view -p'. 
 
1/ cylc [prep] edit SUITE 
Change to the suite definition directory and edit the suite.rc file. 
 
2/ cylc [prep] edit -i,--inline SUITE 
Edit the suite with include-files inlined between special markers. The 
original suite.rc file is temporarily replaced so that the inlined 
version is "live" during editing (i.e. you can run suites during 
editing and cylc will pick up changes to the suite definition). The 
inlined file is then split into its constituent include-files 
again when you exit the editor. Include-files can be nested or 
multiply-included; in the latter case only the first inclusion is 
inlined (this prevents conflicting changes made to the same file). 
 
3/ cylc [prep] edit --cleanup SUITE 
Remove backup files left by previous INLINED edit sessions. 
 
INLINED EDITING SAFETY: The suite.rc file and its include-files are 
automatically backed up prior to an inlined editing session. If the 
editor dies mid-session just invoke 'cylc edit -i' again to recover from 
the last saved inlined file. On exiting the editor, if any of the 
original include-files are found to have changed due to external 
intervention during editing you will be warned and the affected files 
will be written to new backups instead of overwriting the originals. 
Finally, the inlined suite.rc file is also backed up on exiting 
the editor, to allow recovery in case of accidental corruption of the 
include-file boundary markers in the inlined file. 
 
The edit process is spawned in the foreground as follows: 
  % <editor> suite.rc 
Where <editor> is defined in the cylc site/user config files. 
 
See also 'cylc [prep] view'. 
 
Arguments: 
   SUITE               Suite name or path 
 
Options: 
  -h, --help     show this help message and exit 
  -i, --inline   Edit with include-files inlined as described above. 
  --cleanup      Remove backup files left by previous inlined edit sessions. 
  -g, --gui      Force use of the configured GUI editor. 
  --user=USER    Other user account name. This results in command reinvocation 
                 on the remote account. 
  --host=HOST    Other host name. This results in command reinvocation on the 
                 remote account. 
  -v, --verbose  Verbose output mode. 
  --debug        Run suites in non-daemon mode, and show exception tracebacks. 
  --db=PATH      Alternative suite registration database location, defaults to 
                 $HOME/.cylc/REGDB.

C.2.15 email-suite
 
USAGE: cylc [hook] email-suite EVENT SUITE MESSAGE 
 
This is a simple suite event hook script that sends an email. 
The command line arguments are supplied automatically by cylc. 
 
For example, to get an email alert when a suite shuts down: 
 
# SUITE.RC 
[cylc] 
   [[environment]] 
      MAIL_ADDRESS = foo@bar.baz.waz 
   [[event hooks]] 
      shutdown handler = cylc email-suite 
 
See the Suite.rc Reference (Cylc User Guide) for more information 
on suite and task event hooks and event handler scripts.

C.2.16 email-task
 
USAGE: cylc [hook] email-task EVENT SUITE TASKID MESSAGE 
 
This is a simple task event hook handler script that sends an email. 
The command line arguments are supplied automatically by cylc. 
 
For example, to get an email alert whenever any task fails: 
 
# SUITE.RC 
[cylc] 
   [[environment]] 
      MAIL_ADDRESS = foo@bar.baz.waz 
[runtime] 
   [[root]] 
      [[[event hooks]]] 
         failed handler = cylc email-task 
 
See the Suite.rc Reference (Cylc User Guide) for more information 
on suite and task event hooks and event handler scripts.

C.2.17 failed
 
Usage: cylc [task] failed [OPTIONS] [REASON] 
 
This command is part of the cylc task messaging interface, used by 
running tasks to communicate progress to their parent suite. 
 
The failed command reports failure of task execution (and releases the 
task lock to the lockserver if necessary). It is automatically called in 
case of an error trapped by the task job script, but it can also be 
called explicitly for self-detected failures if necessary. 
 
Suite and task identity are determined from the task execution 
environment supplied by the suite (or by the single task 'submit' 
command, in which case the message is just printed to stdout). 
 
See also: 
    cylc [task] message 
    cylc [task] started 
    cylc [task] succeeded 
 
Arguments: 
    REASON        - message explaining why the task failed. 
 
Options: 
  -h, --help     show this help message and exit 
  -v, --verbose  Verbose output mode.

C.2.18 get-config
 
Usage: cylc [info] get-config [OPTIONS] SUITE 
 
Print configuration settings parsed from a suite definition, after 
runtime inheritance processing and including default values for items 
that are not explicitly set in the suite.rc file. 
 
Config items containing spaces must be quoted on the command line. If 
a single item is requested only its value will be printed; otherwise the 
full nested structure below the requested config section is printed. 
 
Example, from a suite registered as foo.bar: 
|# SUITE.RC 
|[runtime] 
|    [[modelX]] 
|        [[[environment]]] 
|            FOO = foo 
|            BAR = bar 
 
$ cylc get-config --item=[runtime][modelX][environment]FOO foo.bar 
foo 
 
$ cylc get-config --item=[runtime][modelX][environment] foo.bar 
FOO = foo 
BAR = bar 
 
$ cylc get-config --item=[runtime][modelX] foo.bar 
... 
[[[environment]]] 
    FOO = foo 
    BAR = bar 
... 
 
Arguments: 
   SUITE               Suite name or path 
 
Options: 
  -h, --help            show this help message and exit 
  -i [SEC...]ITEM, --item=[SEC...]ITEM 
                        The config item to print. Can be used multiple times 
                        on the same command line. 
  -t, --tasks           Print configured task list. 
  -m, --mark-up         Prefix output lines with '!cylc!' to aid in automated 
                        parsing (output can be contaminated by stdout from 
                        login scripts, for example, for remote invocation). 
  -p, --python          Write out the config data structure in Python native 
                        format. 
  --sparse              Only report [runtime] items  explicitly set in the 
                        suite.rc file, not underlying default settings. 
  -o, --one-line        Combine the result from multiple --item requests onto 
                        one line, with internal spaces replaced by the '*' 
                        character. For single-value items only. 
  -a, --all-tasks       For [runtime] items (e.g. --item='command scripting') 
                        report values for all tasks prefixed by task name. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.

C.2.19 get-directory
 
Usage: cylc [db] get-directory REG 
 
Retrieve and print the directory location of suite REG. 
Here's an easy way to move to a suite directory: 
  $ cd $(cylc get-dir REG) 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help     show this help message and exit 
  --user=USER    Other user account name. This results in command reinvocation 
                 on the remote account. 
  --host=HOST    Other host name. This results in command reinvocation on the 
                 remote account. 
  -v, --verbose  Verbose output mode. 
  --debug        Run suites in non-daemon mode, and show exception tracebacks. 
  --db=PATH      Alternative suite registration database location, defaults to 
                 $HOME/.cylc/REGDB.

C.2.20 get-global-config
 
Usage: cylc [admin] get-global-config [OPTIONS] 
 
Print settings from the cylc site and user config files. 
 
0) defaults (internal cylc config file spec) 
1) $CYLC_DIR/conf/siterc/site.rc  # site file (overrides defaults) 
2) $HOME/.cylc/user.rc            # user file (overrides site) 
 
Without options, just validates combined site/user config files. 
 
To create a new site or user config file, e.g.: 
  % cylc get-global-config --print > $HOME/.cylc/user.rc 
 
 
Options: 
  -h, --help            show this help message and exit 
  --print               Write parsed site/user settings to stdout. 
  -v, --verbose         Print extra information. 
  --strict              Abort if either the site or user config file fails 
                        validation (otherwise carry on using default values). 
  --debug               Show exception tracebacks. 
  --print-run-dir       Display the site configured run directory. 
  -i [SEC...]ITEM, --item=[SEC...]ITEM 
                        The config item to print. Can be used multiple times 
                        on the same command line. 
  -p, --python          Write out the config data structure in Python native 
                        format.

C.2.21 get-job-status
 
USAGE: cylc [control] get-job-status ST-FILE JOB-SYS JOB-ID 
 
This command is normally invoked automatically by cylc, to poll for job 
status of a task. To determine the current or final state of a task 
known to have been submitted previously, the automatically generated 
task status file must be interpreted after interrogating the batch queue 
(or similar) to see if it is currently waiting, running, or gone 
(finished or failed). 
 
Options: 
  -h, --help   Print this help message and exit. 
 
Arguments: 
  ST-FILE - the task status file (written to the task log directory). 
  JOB-SYS - the name of the job submission system, e.g. pbs. 
  JOB-ID - the job ID in the job submission system.

C.2.22 gpanel
 
Usage: cylc gpanel [OPTIONS] 
 
This is a cylc summary panel applet for monitoring running suites on a set of 
hosts in GNOME 2. 
 
To install this applet, run "cylc gpanel --install" 
and follow the instructions that it gives you. 
 
This applet can be tested using the --test option. 
 
To customize themes, copy $CYLC_DIR/conf/gcylcrc/gcylc.rc.eg to 
$HOME/.cylc/gcylc.rc and follow the instructions in the file. 
 
To configure default suite hosts, edit the 
[suite host scanning]hosts entry in your site.rc file. 
 
Options: 
  -h, --help  show this help message and exit 
  --compact   Switch on compact mode at runtime. 
  --install   Install the panel applet. 
  --test      Run in a standalone window.

C.2.23 graph
 
Usage: 1/ cylc [prep] graph [OPTIONS] SUITE [START [STOP]] 
     Plot the suite.rc dependency graph for SUITE. 
       2/ cylc [prep] graph [OPTIONS] -f,--file FILE 
     Plot the specified dot-language graph file. 
 
Plot cylc dependency graphs in a pannable, zoomable viewer. 
 
The viewer updates automatically when the suite.rc file is saved during 
editing. By default the full cold start graph is plotted; you can omit 
cold start tasks with the '-w,--warmstart' option.  Specify the optional 
initial and final cycle time arguments to override the suite.rc defaults. 
If you just override the initial cycle, only that cycle will be plotted. 
You can save an image of your graph using the "Save" button on the toolbar. 
 
GRAPH VIEWER CONTROLS: 
     Left-click to center the graph on a node. 
     Left-drag to pan the view. 
     Zoom buttons, mouse-wheel, or ctrl-left-drag to zoom in and out. 
     Shift-left-drag to zoom in on a box. 
     Also: "Best Fit" and "Normal Size". 
     Landscape mode on/off. 
  Family (namespace) grouping controls: 
    Toolbar: 
     "group" - group all families up to root. 
     "ungroup" - recursively ungroup all families. 
    Right-click menu: 
     "group" - close this node's parent family. 
     "ungroup" - open this family node. 
     "recursive ungroup" - ungroup all families below this node. 
 
Arguments: 
   [SUITE]               Suite name or path 
   [START]               Initial cycle time to plot (default=2999010100) 
   [STOP]                Final cycle time to plot (default=2999010123) 
 
Options: 
  -h, --help            show this help message and exit 
  -n, --namespaces      Plot the suite namespace inheritance hierarchy (task 
                        run time properties). 
  -f FILE, --file=FILE  View a specific dot-language graphfile. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.

C.2.24 gsummary
 
Usage: cylc gsummary [OPTIONS] 
 
This is the cylc summary gui for monitoring running suites on a set of 
hosts. 
 
To customize themes copy $CYLC_DIR/conf/gcylcrc/gcylc.rc.eg to 
$HOME/.cylc/gcylc.rc and follow the instructions in the file. 
 
Options: 
  -h, --help            show this help message and exit 
  --user=USER           User account name (defaults to $USER). 
  --host=HOST           Host names to monitor (override site default). 
  --poll-interval=SECONDS 
                        Polling interval (time between updates) in seconds

C.2.25 gui
 
Usage: cylc gui [OPTIONS] [REG] 
gcylc [OPTIONS] [REG] 
 
This is the cylc Graphical User Interface. 
 
Local suites can be opened and switched between from within gcylc. To 
connect to running remote suites (whose passphrase you have installed) 
you must currently use --host and/or --user on the gcylc command line. 
 
Available task state color themes are shown under the View menu. To 
customize themes copy $CYLC_DIR/conf/gcylcrc/gcylc.rc.eg to 
$HOME/.cylc/gcylc.rc and follow the instructions in the file. 
 
Arguments: 
   [REG]               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --print-config        Print combined (system + user) gcylc config, and exit. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.

C.2.26 hold
 
Usage: cylc [control] hold [OPTIONS] REG [MATCH TAG] 
 
Hold one or more waiting tasks, or a whole suite. Held tasks do not 
submit even if they are ready to run. 
 
For matching multiple tasks or families at once note that MATCH is 
interpreted as a full regular expression, not a simple shell glob. 
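The distinction between a full regular expression and a shell glob is easy to trip over. A small Python sketch of the difference (the task names are invented, and fully-anchored matching is an assumption based on the wording above):

```python
import re
from fnmatch import fnmatch

names = ["model_atmos", "model_ocean", "post_atmos"]

# As a shell glob, 'model_*' would match both model tasks:
print([n for n in names if fnmatch(n, "model_*")])
# ['model_atmos', 'model_ocean']

# The regular-expression equivalent is 'model_.*', matched in full:
print([n for n in names if re.fullmatch("model_.*", n)])
# ['model_atmos', 'model_ocean']

# A glob pattern used as a regex means something different:
# 'model_*' is 'model' followed by zero or more underscores.
print([n for n in names if re.fullmatch("model_*", n)])
# []
```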
 
See also 'cylc [control] release'. 
 
Arguments: 
   REG                   Suite name 
   [MATCH]               Task or family name matching regular expression 
   [TAG]                 Task cycle time or integer tag 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files. 
  -m, --family        Match members of named families rather than tasks.

C.2.27 housekeeping
 
Usage: 1/ cylc [util] housekeeping [OPTIONS] SOURCE MATCH OPER OFFSET [TARGET] 
Usage: 2/ cylc [util] housekeeping [OPTIONS] FILE 
 
Parallel archiving and cleanup of files or directories with names 
that contain a cycle time. Matched items are grouped into batches in 
which members are processed in parallel, by spawned sub-processes. 
Once all batch members have completed, the next batch is processed. 
 
OPERATE ('delete', 'move', or 'copy') on items (files or directories) 
matching a Python-style regular expression MATCH in directory SOURCE 
whose names contain a cycle time (as YYYYMMDDHH, or YYYYMMDD and HH 
separately) more than OFFSET (integer hours) earlier than a base cycle 
time (which can be $CYLC_TASK_CYCLE_TIME if called by a cylc task, or 
otherwise specified on the command line). 
 
FILE is a housekeeping config file containing one or more lines of: 
 
   VARNAME=VALUE 
   # comment 
   SOURCE    MATCH    OPERATION   OFFSET   [TARGET] 
 
(example: $CYLC_DIR/conf/housekeeping.eg) 
 
MATCH must be a Python-style regular expression (NOT A SHELL GLOB 
EXPRESSION!) to match the names of items to be operated on AND to 
extract the cycle time from the names via one or two parenthesized 
sub-expressions - '(\d{10})' for YYYYMMDDHH, '(\d{8})' and '(\d{2})' 
for YYYYMMDD and HH in either order. Partial matching can be used 
(partial: 'foo-(\d{10})'; full: '^foo-(\d{10})$'). Any additional 
parenthesized sub-expressions, e.g. for either-or matching, MUST 
be of the (?:...) type to avoid creating a new match group. 
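The group-capture conventions above can be illustrated in Python (the file names are invented for the example):

```python
import re

# Partial match: any name containing 'obs-' followed by ten digits;
# the single parenthesized group captures YYYYMMDDHH.
partial = re.compile(r'obs-(\d{10})')
m = partial.search('nwp_obs-2013121706.nc')
print(m.group(1))
# 2013121706

# Split form: YYYYMMDD and HH captured separately. The either-or
# alternative uses a non-capturing (?:...) group, so the cycle-time
# groups keep their positions.
split = re.compile(r'^(?:obs|fcst)-(\d{8})_(\d{2})$')
m = split.match('fcst-20131217_06')
print(m.group(1) + m.group(2))
# 2013121706
```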
 
SOURCE and TARGET must be on the local filesystem and may contain 
environment varables such as $HOME or ${FOO} (e.g. as defined in the 
suite.rc file for suite housekeeping tasks). Variables defined in 
the housekeeping file itself can also be used, as above. 
 
TARGET may contain the strings YYYYMMDDHH, YYYY, MM, DD, HH; these 
will be replaced with the extracted cycle time for each matched item, 
e.g. $ARCHIVE/oper/YYYYMM/DD. 
 
If TARGET is specified for the 'delete' operation, matched items in 
SOURCE will not be deleted unless an identical item is found in 
TARGET. This can be used to check that important files have been 
successfully archived before deleting the originals. 
 
The 'move' and 'copy' operations are aborted if the TARGET/item already 
exists, but a warning is emitted if the source and target items are not 
identical. 
 
To implement a simple ROLLING ARCHIVE of cycle-time labelled files or 
directories: just use 'delete' with OFFSET set to the archive length. 
 
SAFE ARCHIVING: The 'move' operation is safe - it uses Python's 
shutil.move(), which renames files on the local disk partition and 
otherwise copies before deleting the original. But for extra safety 
consider two-step archiving and cleanup: 
1/ copy files to archive, then 
2/ delete the originals only if identicals are found in the archive. 
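The OFFSET test and TARGET substitution can be sketched in Python (this is an illustration of the rules described above, not the housekeeping implementation itself; the function names are invented):

```python
from datetime import datetime, timedelta

def is_older_than(item_cycle, base_cycle, offset_hours):
    """True if the item's cycle time is more than OFFSET hours earlier
    than the base cycle time, i.e. the item is due for housekeeping."""
    fmt = '%Y%m%d%H'  # YYYYMMDDHH
    age = datetime.strptime(base_cycle, fmt) - datetime.strptime(item_cycle, fmt)
    return age > timedelta(hours=offset_hours)

def expand_target(target, cycle):
    """Replace YYYYMMDDHH, YYYY, MM, DD, HH in a TARGET template.
    The longest key is substituted first so YYYYMMDDHH is not
    partially consumed by the shorter keys."""
    for key, val in [('YYYYMMDDHH', cycle), ('YYYY', cycle[0:4]),
                     ('MM', cycle[4:6]), ('DD', cycle[6:8]),
                     ('HH', cycle[8:10])]:
        target = target.replace(key, val)
    return target

print(is_older_than('2013121100', '2013121706', 24))
# True (the item is 150 hours older than the base cycle)
print(expand_target('$ARCHIVE/oper/YYYYMM/DD', '2013121706'))
# $ARCHIVE/oper/201312/17
```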
 
Options: 
  -h, --help            show this help message and exit 
  --cycletime=YYYYMMDDHH 
                        Cycle time, defaults to $CYLC_TASK_CYCLE_TIME 
  --mode=MODE           Octal umask for creating new destination directories. 
                        E.g. 0775 for drwxrwxr-x 
  -o LIST, --only=LIST  Only action config file lines matching any member of a 
                        comma-separated list of regular expressions. 
  -e LIST, --except=LIST 
                        Only action config file lines NOT matching any member 
                        of a comma-separated list of regular expressions. 
  -v, --verbose         print the result of every action 
  -d, --debug           print item matching output. 
  -c, --cheapdiff       Assume source and target identical if the same size 
  -b INT, --batchsize=INT 
                        Batch size for parallel processing of matched files. 
                        Members of each batch (matched items) are processed in 
                        parallel; when a batch completes, the next batch 
                        starts. Defaults to a batch size of 1, i.e. sequential 
                        processing.

C.2.28 import-examples
 
 
USAGE: cylc [admin] import-examples DIR [GROUP] 
 
Copy the cylc example suites to DIR/GROUP and register 
them for use under the GROUP suite name group. 
 
Arguments: 
   DIR    destination directory 
   GROUP  suite name group (default: cylc-<version>)

C.2.29 insert
 
Usage: cylc [control] insert [OPTIONS] REG MATCH TAG [STOP] 
 
Insert task proxies into a running suite. Uses of insertion include: 
 1) insert a task that was excluded by the suite definition at start-up. 
 2) reinstate a task that was previously removed from a running suite. 
 3) re-run an old task that cannot be retriggered because its task proxy 
 is no longer live in the suite. 
 
Be aware that inserted cycling tasks keep on cycling as normal, even if 
another instance of the same task exists at a later cycle (instances of 
the same task at different cycles can coexist, but a newly spawned task 
will not be added to the pool if it catches up to another task with the 
same ID). 
 
See also 'cylc submit', for running tasks without the scheduler. 
 
For matching multiple tasks or families at once note that MATCH is 
interpreted as a full regular expression, not a simple shell glob. 
 
Arguments: 
   REG                  Suite name 
   MATCH                Task or family name matching regular expression 
   TAG                  Cycle time or integer tag 
   [STOP]               Optional stop tag for inserted task. 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files. 
  -m, --family        Match members of named families rather than tasks.

C.2.30 jobscript
 
 
USAGE: cylc [prep] jobscript [OPTIONS] REG TASK 
 
Generate a task job script and print it to stdout. 
 
Here's how to capture the script in the vim editor: 
  % cylc jobscript REG TASK | vim - 
Emacs unfortunately cannot read from stdin: 
  % cylc jobscript REG TASK > tmp.sh; emacs tmp.sh 
 
This command wraps 'cylc [control] submit --dry-run'. 
Other options (e.g. for suite host and owner) are passed 
through to the submit command. 
 
Options: 
  -h,--help   - print this usage message. 
 (see also 'cylc submit --help') 
 
Arguments: 
  REG         - Registered suite name. 
  TASK        - Task ID (NAME.TAG)

C.2.31 kill
 
Usage: cylc [control] kill [OPTIONS] REG MATCH TAG 
 
Kill a 'submitted' or 'running' task and update the suite state accordingly. 
 
For matching multiple tasks or families at once note that MATCH is 
interpreted as a full regular expression, not a simple shell glob. 
 
Arguments: 
   REG                 Suite name 
   MATCH               Task or family name matching regular expression 
   TAG                 Task cycle time or integer tag 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files. 
  -m, --family        Match members of named families rather than tasks.

C.2.32 list
 
Usage: cylc [info|prep] list|ls [OPTIONS] SUITE 
 
Print runtime namespace names (tasks and families), the first-parent 
inheritance graph, or actual tasks for a given cycle range. 
 
The first-parent inheritance graph determines the primary task family 
groupings that are collapsible in gcylc suite views and the graph 
viewer tool. To visualize the full multiple inheritance hierarchy use: 
  'cylc graph -n'. 
 
Arguments: 
   SUITE               Suite name or path 
 
Options: 
  -h, --help            show this help message and exit 
  -a, --all-tasks       Print all tasks, not just those used in the graph. 
  -n, --all-namespaces  Print all runtime namespaces, not just tasks. 
  -m, --mro             Print the linear "method resolution order" for each 
                        namespace (the multiple-inheritance precedence order 
                        as determined by the C3 linearization algorithm). 
  -t, --tree            Print the first-parent inheritance hierarchy in tree 
                        form. 
  -b, --box             With -t/--tree, use unicode box characters. Your 
                        terminal must be able to display unicode characters. 
  -w, --with-titles     Print namespace titles too. 
  -c START[,STOP], --cycles=START[,STOP] 
                        Print the task IDs of the tasks that would actually be 
                        created in the START [through STOP] cycles (or '1's 
                        for non-cycling tasks). 
  --cold                With -c/--cycles, print tasks as if for a cold-start 
                        from the START cycle (default False). 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.
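The C3 linearization mentioned under -m/--mro is the same algorithm Python uses to order classes under multiple inheritance, so it can be illustrated directly (these family names are invented for the example):

```python
# A minimal sketch of C3 linearization via Python's class MRO.
class Root: pass
class Ops(Root): pass        # first parent of Model
class Science(Root): pass    # second parent of Model
class Model(Ops, Science): pass

# The linear precedence order: the namespace itself, then its parents
# left to right, then the shared root (Python appends 'object').
mro = [c.__name__ for c in Model.__mro__]
assert mro == ["Model", "Ops", "Science", "Root", "object"]
```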

C.2.33 lockclient
 
Usage: cylc [util] lockclient|lc [OPTIONS] 
 
This is the command line client interface to the cylc lockserver daemon, 
for server interrogation and manual lock management. 
 
Use of the lockserver is optional (see suite.rc documentation) 
 
Manual lock acquisition is mainly for testing purposes, but manual 
release may be required to remove stale locks if a suite or task dies 
without cleaning up after itself. 
 
See also: 
    cylc lockserver 
 
Options: 
  -h, --help            show this help message and exit 
  --acquire-task=SUITE:TASK.CYCLE 
                        Acquire a task lock. 
  --release-task=SUITE:TASK.CYCLE 
                        Release a task lock. 
  --acquire-suite=SUITE 
                        Acquire an exclusive suite lock. 
  --acquire-suite-nonex=SUITE 
                        Acquire a non-exclusive suite lock. 
  --release-suite=SUITE 
                        Release a suite and associated task locks 
  -p, --print           Print all locks. 
  -l, --list            List all locks (same as -p). 
  -c, --clear           Release all locks. 
  -f, --filenames       Print lockserver PID, log, and state filenames. 
  --pyro-timeout=SEC    Set a timeout for Pyro network connections. The 
                        default is no timeout.

C.2.34 lockserver
 
Usage: cylc [util] lockserver [-f CONFIG] ACTION 
 
The cylc lockserver daemon brokers suite and task locks for a single 
user. These locks are analogous to traditional lock files, but they work 
even for tasks that start and finish executing on different hosts. Suite 
locks prevent multiple instances of the same suite from running at the 
same time (even if registered under different names) unless the suite 
allows that. Task locks do the same for individual tasks (even if 
submitted outside of their suite using 'cylc submit'). 
 
The command line user interface for interrogating the daemon, and 
for manual lock management, is 'cylc lockclient'. 
 
Use of the lockserver is optional (see suite.rc documentation). 
 
The lockserver reads a config file that specifies the location of the 
daemon's process ID, state, and log files. The default config file 
is '$CYLC_DIR/conf/lockserver.conf'. You can specify an alternative 
config file on the command line, but then all subsequent interaction 
with the daemon via the lockclient command must also specify the same 
file (this is really only for testing purposes). The default process ID, 
state, and log files paths are relative to $HOME so this should be 
sufficient for all users. 
 
The state file records currently held locks and, if it exists at 
startup, is used to initialize the lockserver (i.e. suite and task locks 
are not lost if the lockserver is killed and restarted). All locking 
activity is recorded in the log file. 
 
Arguments: 
  ACTION   -  'start', 'stop', 'status', 'restart', or 'debug' 
               In debug mode the server does not daemonize, so its 
               stdout and stderr streams are not lost. 
 
Options: 
  -h, --help            show this help message and exit 
  -c CONFIGFILE, --config-file=CONFIGFILE 
                        Config file (default $CYLC_DIR/conf/lockserver.conf).

C.2.35 message
 
Usage: cylc [task] message [OPTIONS] MESSAGE 
 
This command is part of the cylc task messaging interface, used by 
running tasks to communicate progress to their parent suite. 
 
Suite and task identity are determined from the task execution 
environment supplied by the suite (or by the single task 'submit' 
command, in which case the message is just printed to stdout). 
 
See also: 
    cylc [task] started 
    cylc [task] succeeded 
    cylc [task] failed 
 
Options: 
  -h, --help            show this help message and exit 
  -p PRIORITY           message priority: NORMAL, WARNING, or CRITICAL; 
                        default NORMAL. 
  --next-restart-completed 
                        Report next restart file(s) completed 
  --all-restart-outputs-completed 
                        Report all restart outputs completed at once. 
  --all-outputs-completed 
                        Report all internal outputs completed at once. 
  -v, --verbose         Verbose output mode.

C.2.36 monitor
 
Usage: cylc [info] monitor [OPTIONS] REG 
 
A terminal-based suite monitor that updates the current state of all 
tasks in real time. It is effective even for quite large suites if 
'--align' is not used. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help          show this help message and exit 
  -a, --align         Align columns by task name. This option is only useful 
                      for small suites. 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation.

C.2.37 nudge
 
Usage: cylc [control] nudge [OPTIONS] REG 
 
Cause the cylc task processing loop to be invoked in a running suite. 
 
This happens automatically when the state of any task changes such that 
task processing (dependency negotiation etc.) is required, or if a 
clock-triggered task is ready to run. 
 
The main reason to use this command is to update the "estimated time till 
completion" intervals shown in the tree-view suite control GUI, during 
periods when nothing else is happening. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files.

C.2.38 ping
 
Usage: cylc [discover] ping [OPTIONS] REG [TASK] 
 
If suite REG (or task TASK in it) is running, exit (silently, unless 
-v,--verbose is specified); else print an error message and exit with 
error status. For tasks, success means the task proxy is currently in 
the 'running' state. 
 
Arguments: 
   REG                  Suite name 
   [TASK]               Task NAME.TAG (TAG is cycle time or integer) 
 
Options: 
  -h, --help          show this help message and exit 
  --print-ports       Print the port range from the cylc site config file. 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files.

C.2.39 poll
 
Usage: cylc [control] poll [OPTIONS] REG MATCH TAG 
 
Poll a 'submitted' or 'running' task to verify its status. If a job was 
killed by external means this will update the suite accordingly. 
 
Note that automatic job polling can be used to track task status on task 
hosts that do not allow any communication by RPC (pyro) or ssh back to 
the suite host - see site/user config file documentation. 
 
Polling is also done automatically on restarting a suite, for any tasks 
that were recorded as submitted or running when the suite went down. 
 
For matching multiple tasks or families at once note that MATCH is 
interpreted as a full regular expression, not a simple shell glob. 
 
Arguments: 
   REG                 Suite name 
   MATCH               Task or family name matching regular expression 
   TAG                 Task cycle time or integer tag 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files. 
  -m, --family        Match members of named families rather than tasks.

C.2.40 print
 
Usage: cylc [db] print [OPTIONS] [REGEX] 
 
Print suite database registrations. 
 
Note on result filtering: 
  (a) The filter patterns are Regular Expressions, not shell globs, so 
the general wildcard is '.*' (match zero or more of anything), NOT '*'. 
  (b) For printing purposes there is an implicit wildcard at the end of 
each pattern ('foo' is the same as 'foo.*'); use the string end marker 
to prevent this ('foo$' matches only literal 'foo'). 
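Point (b) mirrors how Python's re.match behaves: patterns are anchored at the start but not at the end, so the trailing wildcard comes for free unless you add '$'. A quick sketch with a made-up suite name:

```python
import re

# 'foo' behaves like 'foo.*': re.match anchors only the start,
# so any registration beginning with 'foo' matches.
assert re.match("foo", "foo.bar.baz")

# The string-end marker restricts the match to literal 'foo'.
assert re.match("foo$", "foo")
assert not re.match("foo$", "foo.bar.baz")
```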
 
Arguments: 
   [REGEX]               Suite name regular expression pattern 
 
Options: 
  -h, --help     show this help message and exit 
  -t, --tree     Print registrations in nested tree form. 
  -b, --box      Use unicode box drawing characters in tree views. 
  -a, --align    Align columns. 
  -x             don't print suite definition directory paths. 
  -y             Don't print suite titles. 
  --fail         Fail (exit 1) if no matching suites are found. 
  --user=USER    Other user account name. This results in command reinvocation 
                 on the remote account. 
  --host=HOST    Other host name. This results in command reinvocation on the 
                 remote account. 
  -v, --verbose  Verbose output mode. 
  --debug        Run suites in non-daemon mode, and show exception tracebacks. 
  --db=PATH      Alternative suite registration database location, defaults to 
                 $HOME/.cylc/REGDB.

C.2.41 purge
 
Usage: cylc [control] purge [OPTIONS] REG TASK STOP 
 
Remove an entire tree of dependent tasks, over multiple cycles into the 
future, from a running suite. The top task of the purge will be forced 
to spawn and then removed; so will every task that depends on it, every 
task that depends on those, and so on, up to the given stop cycle time. 
 
WARNING: THIS COMMAND IS DANGEROUS but in case of disaster you can 
restart the suite from the automatic pre-purge state dump (the filename 
will be logged by cylc before the purge is actioned). 
 
UNDERSTANDING HOW PURGE WORKS: cylc identifies tasks that depend on 
the top task, and then on its downstream dependents, and then on 
theirs, etc., by simulating what would happen if the top task were to 
trigger: it artificially sets the top task to the "succeeded" state 
then negotiates dependencies and artificially sets any tasks whose 
prerequisites get satisfied to "succeeded"; then it negotiates 
dependencies again, and so on until the stop cycle is reached or nothing 
new triggers. Finally it marks "virtually triggered" tasks for removal. 
Consequently: 
  * Dependent tasks will only be identified as such, and purged, if they 
    have already spawned into the top cycle - so let them catch up first. 
  * You can't purge a tree of tasks that has already triggered, because 
    the algorithm relies on detecting new triggering. 
Note also the suite runahead limit must be large enough to bridge the 
purge gap or runahead-held tasks may prevent the purge completing fully. 
 
[development note: post cylc-3.0 we could potentially use the suite 
graph to determine downstream tasks to remove, without doing this 
internal triggering simulation.] 
 
Arguments: 
   REG                Suite name 
   TASK               Task (NAME.CYCLE) to start purge 
   STOP               Cycle (inclusive!) to stop purge 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files.

C.2.42 random
 
Usage: cylc [util] random A B 
 
Generate a random integer in the range [A,B). This is just a command 
interface to Python's random.randrange() function. 
 
Arguments: 
   A     start of the range (inclusive) 
   B     end of the range (exclusive, so must be > A) 
 
Options: 
  -h, --help  show this help message and exit
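Since the command is just a wrapper around Python's random.randrange(), its range semantics can be checked directly:

```python
import random

# 'cylc random A B' amounts to this call: A is inclusive, B is
# exclusive, so B must be strictly greater than A.
value = random.randrange(1, 10)
assert 1 <= value < 10

# randrange raises ValueError for an empty range (A >= B).
try:
    random.randrange(5, 5)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for empty range")
```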

C.2.43 refresh
 
Usage: cylc [db] refresh [OPTIONS] [REGEX] 
 
Check a suite database for invalid registrations (no suite definition 
directory or suite.rc file) and refresh suite titles in case they have 
changed since the suite was registered. Explicit wildcards must be 
used in the match pattern (e.g. 'f' will not match 'foo.bar' unless 
you use 'f.*'). 
 
Arguments: 
   [REGEX]               Suite name match pattern 
 
Options: 
  -h, --help        show this help message and exit 
  -u, --unregister  Automatically unregister invalid registrations. 
  --user=USER       Other user account name. This results in command 
                    reinvocation on the remote account. 
  --host=HOST       Other host name. This results in command reinvocation on 
                    the remote account. 
  -v, --verbose     Verbose output mode. 
  --debug           Run suites in non-daemon mode, and show exception 
                    tracebacks. 
  --db=PATH         Alternative suite registration database location, defaults 
                    to $HOME/.cylc/REGDB.

C.2.44 register
 
Usage: cylc [db] register [OPTIONS] REG PATH 
 
Register the suite definition located in PATH as REG. 
 
Suite names are hierarchical, delimited by '.' (foo.bar.baz); they 
may contain letters, digits, underscore, and hyphens. Colons are not 
allowed because directory paths incorporating the suite name are 
sometimes needed in PATH variables. 
 
EXAMPLES: 
 
For suite definition directories /home/bob/(one,two,three,four): 
 
% cylc db reg bob         /home/bob/one 
% cylc db reg foo.bag     /home/bob/two 
% cylc db reg foo.bar.baz /home/bob/three 
% cylc db reg foo.bar.waz /home/bob/four 
 
% cylc db pr '^foo'             # print in flat form 
  bob         | "Test Suite One"   | /home/bob/one 
  foo.bag     | "Test Suite Two"   | /home/bob/two 
  foo.bar.baz | "Test Suite Three" | /home/bob/three 
  foo.bar.waz | "Test Suite Four"  | /home/bob/four 
 
% cylc db pr -t '^foo'          # print in tree form 
  bob        "Test Suite One"   | /home/bob/one 
  foo 
   |-bag     "Test Suite Two"   | /home/bob/two 
   `-bar 
     |-baz   "Test Suite Three" | /home/bob/three 
     `-waz   "Test Suite Four"  | /home/bob/four 
 
Arguments: 
   REG                Suite name 
   PATH               Suite definition directory 
 
Options: 
  -h, --help     show this help message and exit 
  --user=USER    Other user account name. This results in command reinvocation 
                 on the remote account. 
  --host=HOST    Other host name. This results in command reinvocation on the 
                 remote account. 
  -v, --verbose  Verbose output mode. 
  --debug        Run suites in non-daemon mode, and show exception tracebacks. 
  --db=PATH      Alternative suite registration database location, defaults to 
                 $HOME/.cylc/REGDB.

C.2.45 release
 
Usage: cylc [control] release|unhold [OPTIONS] REG [MATCH] [TAG] 
 
Release one or more held tasks, or a whole suite. Held tasks do not 
submit even if they are ready to run. 
 
For matching multiple tasks or families at once note that MATCH is 
interpreted as a full regular expression, not a simple shell glob. 
 
See also 'cylc [control] hold'. 
 
Arguments: 
   REG                   Suite name 
   [MATCH]               Task or family name matching regular expression 
   [TAG]                 Task cycle time or integer tag 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files. 
  -m, --family        Match members of named families rather than tasks.

C.2.46 reload
 
Usage: cylc [control] reload [OPTIONS] REG 
 
Tell a suite to reload its definition at run time. All settings 
including task definitions, with the exception of suite log 
configuration, can be changed on reload. Note that defined tasks can 
be added to or removed from a running suite with the 'cylc insert' and 
'cylc remove' commands, without reloading. This command also allows 
addition and removal of actual task definitions, and therefore insertion 
of tasks that were not defined at all when the suite started (you will 
still need to manually insert a particular instance of a newly defined 
task). Live task proxies that are orphaned by a reload (i.e. their task 
definitions have been removed) will be removed from the task pool if 
they have not started running yet. Changes to task definitions take 
effect immediately, unless a task is already running at reload time. 
 
If the suite was started with Jinja2 template variables set on the 
command line (cylc run --set FOO=bar REG) the same template settings 
apply to the reload (only changes to the suite.rc file itself are 
reloaded). 
 
If the modified suite definition does not parse, failure to reload will 
be reported but no harm will be done to the running suite. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files.

C.2.47 remove
 
Usage: cylc [control] remove [OPTIONS] REG [MATCH] TAG 
 
Remove one or more tasks, or all tasks with a common TAG (cycle time or 
integer tag) from a running suite. 
 
Tasks will spawn successors first if they have not done so already. 
 
For matching multiple tasks or families at once note that MATCH is 
interpreted as a full regular expression, not a simple shell glob. 
 
Arguments: 
   REG                   Suite name 
   [MATCH]               Task or family name matching regular expression 
   TAG                   Task cycle time or integer tag 
 
Options: 
  -h, --help          show this help message and exit 
  --no-spawn          Do not spawn successors before removal. 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files. 
  -m, --family        Match members of named families rather than tasks.

C.2.48 reregister
 
Usage: cylc [db] reregister|rename [OPTIONS] REG1 REG2 
 
Change the name of a suite (or group of suites) from REG1 to REG2. 
Example: 
  cylc db rereg foo.bar.baz test.baz 
 
Arguments: 
   REG1               original name 
   REG2               new name 
 
Options: 
  -h, --help     show this help message and exit 
  --user=USER    Other user account name. This results in command reinvocation 
                 on the remote account. 
  --host=HOST    Other host name. This results in command reinvocation on the 
                 remote account. 
  -v, --verbose  Verbose output mode. 
  --debug        Run suites in non-daemon mode, and show exception tracebacks. 
  --db=PATH      Alternative suite registration database location, defaults to 
                 $HOME/.cylc/REGDB.

C.2.49 reset
 
Usage: cylc [control] reset [OPTIONS] REG MATCH TAG 
 
Reset one or more tasks in a running suite to one of the following states: 
   'waiting' .... prerequisites not satisfied 
   'ready' ...... prerequisites satisfied 
   'succeeded' .. outputs completed 
   'failed' ..... failed 
 
Additionally you can choose: 
   'spawn' ...... force tasks to spawn if they haven't done so already 
 
Tasks set to 'ready' will trigger immediately (see also "cylc trigger"). 
 
If a failed "sequential" task cannot re-run, forcing it to spawn may 
be required, as sequential tasks only spawn on succeeding (alternatively, 
reset it to "succeeded" or use "cylc insert" to insert the next instance). 
 
For matching multiple tasks or families at once note that MATCH is 
interpreted as a full regular expression, not a simple shell glob. 
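The glob/regex distinction matters in practice: as a regular expression, 'post*' means "pos" followed by zero or more "t" characters, so it also matches a task named "pos". A minimal illustration using grep with whole-string matching over hypothetical task names (not drawn from any real suite):

```shell
# Regex 'post*' = "pos" plus zero or more "t": matches "pos" and "post"
# but NOT "postproc" (grep -x requires the whole line to match).
# Use 'post.*' for glob-style "starts with post" behaviour.
printf '%s\n' pos post postproc model | grep -xE 'post*'    # pos, post
printf '%s\n' pos post postproc model | grep -xE 'post.*'   # post, postproc
```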
 
Arguments: 
   REG                 Suite name 
   MATCH               Task or family name matching regular expression 
   TAG                 Task cycle time or integer tag 
 
Options: 
  -h, --help            show this help message and exit 
  -s STATE, --state=STATE 
                        Reset task state to STATE, must be one of waiting 
                        ready succeeded failed spawn 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -f, --force           Do not ask for confirmation before acting. Note that 
                        it is not necessary to use this option if interactive 
                        command prompts have been disabled in the site/user 
                        config files. 
  -m, --family          Match members of named families rather than tasks.

C.2.50 restart
 
Usage: cylc [control] restart [OPTIONS] REG [FILE] 
 
Restart a suite from a previous state. To start from scratch see the 
'cylc run' command. 
 
Suites run in daemon mode unless -n/--no-detach or --debug is used. 
 
The most recent previous state is loaded by default, but other states 
can be specified on the command line (cylc writes special state dumps 
and logs their filenames before actioning intervention commands). 
 
Tasks recorded as 'submitted' or 'running' will be polled to determine 
where they got to while the suite was down. 
 
Arguments: 
   REG                  Suite name 
   [FILE]               Optional state dump file, assumed to reside in the 
                        suite state dump directory unless an absolute path 
                        is given. Defaults to the most recent suite state. 
 
Options: 
  -h, --help            show this help message and exit 
  --non-daemon          (deprecated: use --no-detach) 
  -n, --no-detach       Do not daemonize the suite 
  --profile             Output profiling (performance) information 
  --ignore-final-cycle  Ignore the final cycle time in the state dump. If one 
                        is specified in the suite definition it will be used, 
                        however. 
  --ignore-initial-cycle 
                        Ignore the initial cycle time in the state dump. If 
                        one is specified in the suite definition it will be 
                        used, however. In a restart this is only used to set 
                        $CYLC_SUITE_INITIAL_CYCLE_TIME. 
  --until=CYCLE         Shut down after all tasks have PASSED this cycle time. 
  --hold                Hold (don't run tasks) immediately on starting. 
  --hold-after=CYCLE    Hold (don't run tasks) AFTER this cycle time. 
  -m STRING, --mode=STRING 
                        Run mode: live, simulation, or dummy; default is live. 
  --reference-log       Generate a reference log for use in reference tests. 
  --reference-test      Do a test run against a previously generated reference 
                        log. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.

C.2.51 run
 
Usage: cylc [control] run|start [OPTIONS] REG [START] 
 
Start a suite from scratch. To restart from a previous state see the 
'cylc restart' command. 
 
Suites run in daemon mode unless -n/--no-detach or --debug is used. 
 
The following are all equivalent if no intercycle dependence exists: 
  1/ Cold start (default)    : use special cold-start tasks 
  2/ Warm start (-w,--warm)  : assume a previous cycle 
  3/ Raw  start (-r,--raw)   : assume nothing 
 
1/ COLD START -- any designated cold-start tasks will be inserted in the 
waiting state. The variable $CYLC_SUITE_INITIAL_CYCLE_TIME will be set 
to the initial cycle time, in task environments. 
 
2/ WARM START -- any designated cold-start tasks will be inserted in the 
succeeded state, to stand in for a previous cycle. The variable 
$CYLC_SUITE_INITIAL_CYCLE_TIME will be set to 'None' in task environments 
unless '--ict' is used. 
 
3/ RAW START -- do not insert any cold-start tasks (mainly for testing). 
 
In task environments, $CYLC_SUITE_FINAL_CYCLE_TIME is always set to the 
final cycle time if one is set (by suite.rc file or command line). The 
initial and final cycle time variables persist across suite restarts. 
 
Arguments: 
   REG                   Suite name 
   [START]               Initial cycle time or 'now'; overrides the 
                         suite definition. 
 
Options: 
  -h, --help            show this help message and exit 
  --non-daemon          (deprecated: use --no-detach) 
  -n, --no-detach       Do not daemonize the suite 
  --profile             Output profiling (performance) information 
  -w, --warm            Warm start the suite 
  -r, --raw             Raw start the suite 
  --ict                 Set $CYLC_SUITE_INITIAL_CYCLE_TIME to the initial 
                        cycle time even in a warm start (as for cold starts). 
  --until=CYCLE         Shut down after all tasks have PASSED this cycle time. 
  --hold                Hold (don't run tasks) immediately on starting. 
  --hold-after=CYCLE    Hold (don't run tasks) AFTER this cycle time. 
  -m STRING, --mode=STRING 
                        Run mode: live, simulation, or dummy; default is live. 
  --reference-log       Generate a reference log for use in reference tests. 
  --reference-test      Do a test run against a previously generated reference 
                        log. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.
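As an illustration of --set-file input, the file below (the filename and variable names are hypothetical) sets two Jinja2 template variables, one NAME=VALUE pair per line:

```
N_MEMBERS=5
MODEL=nzlam
```

Passing this file with --set-file is equivalent to giving each pair on the command line with -s (e.g. -s N_MEMBERS=5 -s MODEL=nzlam).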

C.2.52 scan
 
Usage: cylc [discover] scan [OPTIONS] 
 
Detect (by port scanning) running cylc suites and lockservers, and 
print the results. By default only your own running suites will be 
printed.  With --verbose you will also get "Connection Denied" from 
running suites owned by others on the same host. 
 
Simple space-delimited output format for easy parsing: 
    SUITE OWNER HOST PORT 
Here's one way to parse 'cylc scan' output by shell script: 
________________________________________________________________ 
#!/bin/bash 
# parse suite, owner, host, port from 'cylc scan' output lines 
OIFS=$IFS 
IFS=$'\n' 
for LINE in $( cylc scan ); do 
    # split on space and assign tokens to positional parameters: 
    IFS=$' '; set $LINE 
    echo "$1 - $2 - $3 - $4" 
done 
IFS=$OIFS 
---------------------------------------------------------------- 
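When the output is consumed line by line, a while-read loop avoids the IFS juggling altogether. A sketch, with two hypothetical sample lines piped in place of a live 'cylc scan' call:

```shell
# Read SUITE OWNER HOST PORT directly into variables; 'read' splits
# on whitespace by default. The sample lines below stand in for real
# 'cylc scan' output.
printf '%s\n' \
    "nwp.oper hilary wrh-1 7766" \
    "test.baz hilary wrh-1 7767" |
while read -r SUITE OWNER HOST PORT; do
    echo "$SUITE - $OWNER - $HOST - $PORT"
done
```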
 
 
Arguments: 
 
Options: 
  -h, --help          show this help message and exit 
  --print-ports       Print the port range from the site config file 
                      ($CYLC_DIR/conf/siterc/site.rc). 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation.

C.2.53 scp-transfer
 
Usage: cylc [util] scp-transfer [OPTIONS] 
 
An scp wrapper for transferring a list of files and/or directories 
at once. The source and target scp URLs can be local or remote (scp 
can transfer files between two remote hosts). Passwordless ssh must 
be configured appropriately. 
 
ENVIRONMENT VARIABLE INPUTS: 
$SRCE  - list of sources (files or directories) as scp URLs. 
$DEST  - parallel list of targets as scp URLs. 
The source and destination lists should be space-separated. 
 
We let scp determine the validity of source and target URLs. 
Target directories are created pre-copy if they don't exist. 
 
Options: 
 -v     - verbose: print scp stdout. 
 --help - print this usage message.

C.2.54 search
 
Usage: cylc [prep] search|grep [OPTIONS] SUITE PATTERN [PATTERN2...] 
 
Search for pattern matches in suite definitions and any files in the 
suite bin directory. Matches are reported by line number and suite 
section. An unquoted list of PATTERNs will be converted to an OR'd 
pattern. Note that the order of command line arguments conforms to 
normal cylc command usage (suite name first) not that of the grep 
command. 
 
Note that this command performs a text search on the suite definition, 
it does not search the data structure that results from parsing the 
suite definition - so it will not report implicit default settings. 
 
For case insensitive matching use '(?i)PATTERN'. 
 
Arguments: 
   SUITE                       Suite name or path 
   PATTERN                     Python-style regular expression 
   [PATTERN2...]               Additional search patterns 
 
Options: 
  -h, --help     show this help message and exit 
  -x             Do not search in the suite bin directory 
  --user=USER    Other user account name. This results in command reinvocation 
                 on the remote account. 
  --host=HOST    Other host name. This results in command reinvocation on the 
                 remote account. 
  -v, --verbose  Verbose output mode. 
  --debug        Run suites in non-daemon mode, and show exception tracebacks. 
  --db=PATH      Alternative suite registration database location, defaults to 
                 $HOME/.cylc/REGDB.

C.2.55 set-runahead
 
Usage: cylc [control] set-runahead [OPTIONS] REG [HOURS] 
 
Change the suite runahead limit in a running suite. This is the number of 
hours that the fastest task is allowed to get ahead of the slowest. If a 
task spawns beyond that limit it will be held back from running until the 
slowest tasks catch up enough. WARNING: if you omit HOURS no runahead 
limit will be set - DO NOT DO THIS for any cycling suite that has 
no near stop cycle set and is not constrained by clock-triggered 
tasks. 
 
Arguments: 
   REG                   Suite name 
   [HOURS]               Runahead limit (default: no limit) 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files.

C.2.56 set-verbosity
 
Usage: cylc [control] set-verbosity [OPTIONS] REG LEVEL 
 
Change the logging priority level of a running suite.  Only messages at 
or above the chosen priority level will be logged; for example, if you 
choose 'warning', only warning, error, and critical messages will be 
logged. The 'info' level is appropriate under most circumstances. 
 
Arguments: 
   REG                 Suite name 
   LEVEL               debug, info, warning, error, or critical 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files.

C.2.57 show
 
Usage: cylc [info] show [OPTIONS] REG [NAME[.TAG]] 
 
Interrogate a running suite for its title and task list, task 
descriptions, current state of task prerequisites and outputs and, for 
clock-triggered tasks, whether or not the trigger time is up yet. 
 
Arguments: 
   REG                        Suite name 
   [NAME[.TAG]]               Task name or ID 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation.

C.2.58 started
 
Usage: cylc [task] started [OPTIONS] 
 
This command is part of the cylc task messaging interface, used by 
running tasks to communicate progress to their parent suite. 
 
The started command reports commencement of task execution (and it 
acquires a task lock from the lockserver if necessary). It is 
automatically written to the top of task job scripts by cylc and 
therefore does not need to be called explicitly by task scripting. 
 
Suite and task identity are determined from the task execution 
environment supplied by the suite (or by the single task 'submit' 
command, in which case the message is just printed to stdout). 
 
See also: 
    cylc [task] message 
    cylc [task] succeeded 
    cylc [task] failed 
 
Options: 
  -h, --help     show this help message and exit 
  -v, --verbose  Verbose output mode.

C.2.59 stop
 
Usage: cylc [control] stop|shutdown [OPTIONS] REG [STOP] 
 
1/ cylc stop REG 
   Clean shutdown - cease submitting tasks and shut down after current 
submitted and running tasks have finished, and event handlers and job 
poll and kill commands have executed. 
 
2/ cylc stop --quick REG 
   Quick shutdown - cease submitting tasks and shut down without waiting 
for current submitted and running tasks to finish, but do wait for event 
handlers and job poll and kill commands to be executed. 
 
3/ cylc stop --now REG 
   Immediate shut down - do not wait on current submitted and running 
tasks, or on queued event handlers and job poll and kill commands. 
 
4/ cylc stop --kill REG 
   Do a clean shutdown after killing current submitted and running tasks. 
 
5/ cylc stop REG STOP 
   Do a clean shutdown after (a) all tasks have succeeded out to cycle 
time STOP, or (b) all tasks have succeeded out to wall clock time STOP 
(YYYY/MM/DD-HH:mm), or (c) task ID STOP has succeeded. 
 
Note that cylc does not shut down automatically at a designated future 
cycle time (either by the "final cycle time" in the suite definition, or 
by usage case 5/ above) if any failed tasks are present in the suite. 
This is to ensure that failed tasks do not go unnoticed. 
 
The command exits immediately unless --max-polls is greater than zero 
in which case it polls to wait for suite shutdown. 
 
Arguments: 
   REG                  Suite name 
   [STOP]               a/ task TAG (cycle time or integer), or 
                        b/ YYYY/MM/DD-HH:mm (clock time), or 
                        c/ TASK (task ID). 
 
Options: 
  -h, --help          show this help message and exit 
  -k, --kill          Shut down cleanly after killing any tasks currently in 
                      the submitted or running states. 
  -n, --now           Shut down immediately. 
  -Q, --quick         Shut down immediately after running any remaining 
                      event handlers and job poll/kill commands (see above). 
  --max-polls=INT     Maximum number of polls (default 0). 
  --interval=SECS     Polling interval in seconds (default 60). 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files.

C.2.60 submit
 
Usage: cylc [task] submit|single [OPTIONS] REG TASK 
 
Submit a single task to run just as it would be submitted by its suite. 
Task messaging commands will print to stdout but will not attempt to 
communicate with the suite (which does not even need to be running). 
Note that job log file paths are the same as for in-suite tasks. 
 
If the suite is running at the same time and it has acquired an 
exclusive suite lock (which means you cannot run multiple instances 
of the suite at once, even under different registrations) then the 
lockserver will let you 'submit' a task from the suite only under the 
same registration, and only if the task is not locked (i.e. only if 
the same task, NAME.TAG, is not currently running in the suite). 
 
Arguments: 
   REG                Suite name 
   TASK               Target task (NAME.TAG) 
 
Options: 
  -h, --help            show this help message and exit 
  -d, --dry-run         Generate the cylc task execution file for the task and 
                        report how it would be submitted to run. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.

C.2.61 succeeded
 
Usage: cylc [task] succeeded [OPTIONS] 
 
This command is part of the cylc task messaging interface, used by 
running tasks to communicate progress to their parent suite. 
 
The succeeded command reports successful completion of task execution 
(and releases the task lock to the lockserver if necessary). It is 
automatically written to the end of task jobs scripts by cylc, except in 
the case of detaching tasks (suite.rc: 'manual completion = True'), in 
which case it must be called explicitly by final task scripting. 
 
Suite and task identity are determined from the task execution 
environment supplied by the suite (or by the single task 'submit' 
command, in which case the message is just printed to stdout). 
 
See also: 
    cylc [task] message 
    cylc [task] started 
    cylc [task] failed 
 
Options: 
  -h, --help     show this help message and exit 
  -v, --verbose  Verbose output mode.

C.2.62 suite-state
 
Usage: cylc suite-state REG [OPTIONS] 
 
Print task states retrieved from a suite database; or (with --task, 
--cycle, and --status) poll until a given task reaches a given state. 
Polling is configurable with --interval and --max-polls; for a one-off 
check use --max-polls=1. The suite database does not need to exist at 
the time polling commences but allocated polls are consumed waiting for 
it (consider max-polls x interval as an overall timeout). 
 
Note for non-cycling tasks --cycle=1 must be provided. 
 
For your own suites the database location is determined by your 
site/user config. For other suites, e.g. those owned by others, or 
mirrored suite databases, use --run-dir=DIR to specify the location. 
 
Example usage: 
  cylc suite-state REG --task=TASK --cycle=CYCLE --status=STATUS 
returns 0 if TASK.CYCLE reaches STATUS before the maximum number of 
polls, otherwise returns 1. 
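The polling behaviour can be pictured as the loop below. This is a sketch of the retry logic only; check_status is a hypothetical stand-in (here rigged to pass on the second poll) for the actual suite database query.

```shell
# Poll up to MAX_POLLS times, sleeping INTERVAL seconds between polls;
# stop as soon as the status check passes (result=0) or the polls are
# exhausted (result=1).
MAX_POLLS=3
INTERVAL=0          # 'cylc suite-state' defaults to 60 seconds
check_status() { [ "$1" -ge 2 ]; }   # hypothetical: passes on poll 2

result=1
n=0
while [ "$n" -lt "$MAX_POLLS" ]; do
    n=$((n + 1))
    if check_status "$n"; then
        result=0
        break
    fi
    sleep "$INTERVAL"
done
echo "polls=$n result=$result"   # prints "polls=2 result=0"
```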
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  -t TASK, --task=TASK  Specify a task to check the state of. 
  -c CYCLE, --cycle=CYCLE 
                        Specify the cycle to check task states for. 
  -d DIR, --run-dir=DIR 
                        The top level cylc run directory if non-standard. The 
                        database should be DIR/REG/cylc-suite.db. Use to 
                        interrogate suites owned by others, etc.; see note 
                        above. 
  -S STATUS, --status=STATUS 
                        Specify a particular status or triggering condition to 
                        check for. Valid triggering conditions to check for 
                        include: 'fail', 'finish', 'start', 'submit' and 
                        'succeed'. Valid states to check for include: 
                        'failed', 'held', 'queued', 'ready', 'retrying', 
                        'runahead', 'running', 'submit-failed', 
                        'submit-retrying', 'submitted', 'succeeded' and 
                        'waiting'. 
  --max-polls=INT       Maximum number of polls (default 10). 
  --interval=SECS       Polling interval in seconds (default 60). 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode.

C.2.63 test-battery
 
USAGE: cylc test-battery [OPTIONS] [SUBSET] 
 
Run a battery of tests held under $CYLC_DIR/tests/. If SUBSET is specified 
then only run the tests in $CYLC_DIR/tests/SUBSET. 
 
Some of the tests use suites which submit test jobs to a task host and 
user account taken from the environment: 
  $CYLC_TEST_TASK_HOST 
  $CYLC_TEST_TASK_OWNER 
These default to localhost and $USER. Passwordless ssh must be 
configured to the task host account (even if it is local). 
 
For passed test suites, log files and suite run directories are automatically 
cleaned up on the suite host, but not on remote task hosts. Test suites that 
fail are kept in the cylc-run directory to allow manual interrogation. 
 
For more information see "Reference Tests" in the User Guide. 
 
Options: 
  -h, --help   Print this help message and exit. 
 
Supports all the options of "prove".

C.2.64 test-db
 
USAGE: cylc [admin] test-db [--help] 
A thorough test of suite registration database functionality. 
Options: 
  --help   Print this usage message.

C.2.65 trigger
 
Usage: cylc [control] trigger [OPTIONS] REG MATCH TAG 
 
Manually trigger tasks. Triggering an unqueued task sets it "ready to 
run" and queues it for job submission (cylc internal queues). If the 
queue is not limited the task will submit immediately, otherwise it will 
submit when released by its queue. Triggering a queued task overrides the 
queue limiting mechanism and causes the task to submit immediately (be 
aware that this results in a greater number of active tasks than the 
queue limit specifies). 
 
For matching multiple tasks or families at once note that MATCH is 
interpreted as a full regular expression, not a simple shell glob. 
 
Arguments: 
   REG                 Suite name 
   MATCH               Task or family name matching regular expression 
   TAG                 Task cycle time or integer tag 
 
Options: 
  -h, --help          show this help message and exit 
  --user=USER         Other user account name. This results in command 
                      reinvocation on the remote account. 
  --host=HOST         Other host name. This results in command reinvocation on 
                      the remote account. 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=PATH           Alternative suite registration database location, 
                      defaults to $HOME/.cylc/REGDB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  -f, --force         Do not ask for confirmation before acting. Note that it 
                      is not necessary to use this option if interactive 
                      command prompts have been disabled in the site/user 
                      config files. 
  -m, --family        Match members of named families rather than tasks.
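As the help text notes, MATCH is a full regular expression, not a shell glob, and the two have different wildcard syntax. A minimal illustration (hypothetical task names; Python's re and fnmatch modules stand in for cylc's matching):

```python
import re
import fnmatch

tasks = ["model", "model_post", "obs_get"]

# Shell glob semantics: '*' is the wildcard.
glob_hits = [t for t in tasks if fnmatch.fnmatch(t, "model*")]
print(glob_hits)  # ['model', 'model_post']

# Regular expression semantics: '.*' is the wildcard, and the
# pattern is matched against the whole name (fullmatch).
re_hits = [t for t in tasks if re.fullmatch("model.*", t)]
print(re_hits)    # ['model', 'model_post']

# The regex 'model*' means 'mode' plus zero or more 'l's - not
# what a glob user expects:
print([t for t in tasks if re.fullmatch("model*", t)])  # ['model']
```

So to trigger every task whose name starts with "model", the MATCH argument should be 'model.*', not 'model*'.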

C.2.66 unregister
 
Usage: cylc [db] unregister [OPTIONS] REGEX 
 
Remove one or more suites from your suite database. The REGEX pattern 
must match whole suite names to avoid accidental de-registration of 
partial matches (e.g. 'bar.baz' will not match 'foo.bar.baz'). 
 
Associated suite definition directories will not be deleted unless the 
'-d,--delete' option is used. 
 
Arguments: 
   REGEX               Regular expression to match suite names. 
 
Options: 
  -h, --help     show this help message and exit 
  -d, --delete   Delete the suite definition directory too (!DANGEROUS!). 
  -f, --force    Don't ask for confirmation before deleting suite definitions. 
  --user=USER    Other user account name. This results in command reinvocation 
                 on the remote account. 
  --host=HOST    Other host name. This results in command reinvocation on the 
                 remote account. 
  -v, --verbose  Verbose output mode. 
  --debug        Run suites in non-daemon mode, and show exception tracebacks. 
  --db=PATH      Alternative suite registration database location, defaults to 
                 $HOME/.cylc/REGDB.
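The whole-name matching described above can be demonstrated with Python's re module (a sketch of the matching rule, not cylc's actual implementation):

```python
import re

suites = ["bar.baz", "foo.bar.baz"]
pattern = "bar.baz"

# Whole-name matching (fullmatch) is what prevents accidental
# de-registration of partial matches:
whole = [s for s in suites if re.fullmatch(pattern, s)]
print(whole)    # ['bar.baz']

# An unanchored search would also hit 'foo.bar.baz' - the
# accident the help text guards against:
partial = [s for s in suites if re.search(pattern, s)]
print(partial)  # ['bar.baz', 'foo.bar.baz']
```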

C.2.67 upgrade-db
 
Usage: cylc upgrade-db 
 
Upgrade a pre-cylc-5.4 suite name database to the new cylc-5.4+ format. 
This will create a new-format DB if necessary, or if one already exists 
it will transfer old registrations to the new DB so long as the suite 
names do not conflict. It is safe to run this utility multiple times. 
 
Prior to cylc-5.4 the suite name registration DB was a Python pickle 
file stored at $HOME/.cylc/DB.  Since cylc-5.4 it is a directory 
$HOME/.cylc/REGDB/ containing one file per registered suite. The 
filenames are the suite names, and the file contains key=value pairs: 
  shell$ cat $HOME/.cylc/REGDB/my.suite 
  title=my suite title 
  path=/path/to/my/suite/ 
 
Options: 
  -h, --help   show this help message and exit 
  --from=PATH  Path to pre-cylc-5.4 db; default: $HOME/.cylc/DB 
  --to=PATH    Path to new cylc-5.4+ db; default: $HOME/.cylc/REGDB
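The new-format registration files are simple enough to read by hand or by script. A minimal sketch of parsing one, using a hypothetical suite registered in a temporary directory (this is an illustration of the format described above, not cylc's own parser):

```python
import os
import tempfile

# A registration file in the cylc-5.4+ format: the file name is
# the suite name, the contents are key=value pairs.
regdb = tempfile.mkdtemp()
with open(os.path.join(regdb, "my.suite"), "w") as f:
    f.write("title=my suite title\npath=/path/to/my/suite/\n")

# Split each line on the first '=' only, so values may contain '='.
with open(os.path.join(regdb, "my.suite")) as f:
    fields = dict(line.strip().split("=", 1) for line in f if "=" in line)

print(fields["title"])  # my suite title
print(fields["path"])   # /path/to/my/suite/
```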

C.2.68 validate
 
Usage: cylc [prep] validate [OPTIONS] SUITE 
 
Validate a suite definition against the official specification 
files held in $CYLC_DIR/conf/suiterc/. 
 
If the suite definition uses include-files reported line numbers 
will correspond to the inlined version seen by the parser; use 
'cylc view -i,--inline SUITE' for comparison. 
 
Arguments: 
   SUITE               Suite name or path 
 
Options: 
  -h, --help            show this help message and exit 
  --strict              Fail any use of unsafe or experimental features. 
                        Currently this just means naked dummy tasks (tasks 
                        with no corresponding runtime section) as these may 
                        result from unintentional typographic errors in task 
                        names. 
  --no-write            Don't attempt to write out the 'suite.rc.processed' 
                        file to the suite definition directory. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.

C.2.69 view
 
Usage: cylc [prep] view [OPTIONS] SUITE 
 
View a read-only temporary copy of the suite's suite.rc file, in your 
editor, after optional include-file inlining and Jinja2 preprocessing. 
 
The edit process is spawned in the foreground as follows: 
  % <editor> suite.rc 
Where <editor> is defined in the cylc site and user config files 
($CYLC_DIR/conf/siterc/site.rc and $HOME/.cylc/user.rc). 
 
For remote host or owner, the suite will be printed to stdout unless 
the '-g,--gui' flag is used to spawn a remote GUI edit session. 
 
See also 'cylc [prep] edit'. 
 
Arguments: 
   SUITE               Suite name or path 
 
Options: 
  -h, --help            show this help message and exit 
  -i, --inline          Inline include-files. 
  -j, --jinja2          View the suite after Jinja2 template processing 
                        (implies '-i' as well). 
  -m, --mark            (With '-i') Mark inclusions in the left margin. 
  -l, --label           (With '-i') Label file inclusions with the file name. 
                        Line numbers will not correspond to those reported by 
                        the parser. 
  --single              (With '-i') Inline only the first instances of any 
                        multiply-included files. Line numbers will not 
                        correspond to those reported by the parser. 
  -c, --cat             Concatenate continuation lines (line numbers will not 
                        correspond to those reported by the parser). 
  -g, --gui             Force use of the configured GUI editor. 
  --stdout              Print the suite definition to stdout. 
  --mark-for-edit       (With '-i') View file inclusion markers as for 'cylc 
                        edit --inline'. 
  --user=USER           Other user account name. This results in command 
                        reinvocation on the remote account. 
  --host=HOST           Other host name. This results in command reinvocation 
                        on the remote account. 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=PATH             Alternative suite registration database location, 
                        defaults to $HOME/.cylc/REGDB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a Jinja2 template variable in the 
                        suite definition. This option can be used multiple 
                        times on the command line.  WARNING: these settings do 
                        not persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line. 
  --set-file=FILE       Set the value of Jinja2 template variables in the 
                        suite definition from a file containing NAME=VALUE 
                        pairs (one per line). WARNING: these settings do not 
                        persist across suite restarts; they need to be set 
                        again on the "cylc restart" command line.

C.2.70 warranty
 
 
USAGE: cylc [license] warranty [--help] 
   Cylc is released under the GNU General Public License v3.0 
This command prints the GPL v3.0 disclaimer of warranty. 
Options: 
  --help   Print this usage message.

D The Cylc Lockserver

Each cylc user can optionally run his/her own lockserver to prevent accidental invocation of multiple instances of the same suite or task at the same time. The suite and task locks brokered by the lockserver are analogous to traditional lock files, but they work across a network, even for distributed suites containing tasks that start executing on one host and finish on another.
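The "traditional lock files" referred to here work like the following minimal sketch (os.O_EXCL makes file creation atomic: it succeeds only if the file does not already exist). The lockserver provides the same exclusivity guarantee, but brokered over the network rather than through a shared filesystem:

```python
import errno
import os
import tempfile

lockfile = os.path.join(tempfile.mkdtemp(), "suite.lock")

def acquire(path):
    """Atomically create a lock file; fail if it already exists."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as exc:
        if exc.errno == errno.EEXIST:
            return False  # another instance holds the lock
        raise
    os.close(fd)
    return True

def release(path):
    os.unlink(path)

print(acquire(lockfile))  # True  - the first instance gets the lock
print(acquire(lockfile))  # False - a second instance is refused
release(lockfile)
```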

Accidental invocation of multiple instances of the same suite or task at the same time can have serious consequences, so use of the lockserver is worth considering for important operational suites. For less critical use it may be an unnecessary complication, and it is therefore currently disabled by default.

To enable the lockserver:

 
use lockserver = True

The suite will now abort at start-up if it cannot connect to the lockserver. To start your lockserver daemon,

 
shell$ cylc lockserver start

To check that it is running,

 
shell$ cylc lockserver status

For detailed usage information,

 
shell$ cylc lockserver --help

There is a command line client interface,

 
shell$ cylc lockclient --help

for interrogating the lockserver and managing locks manually (e.g. releasing locks if a suite was killed before it could clean up after itself).

To watch suite locks being acquired and released as a suite runs,

 
shell$ watch cylc lockclient --print

E The gcylc Graph View

The graph view in the gcylc GUI has the advantage that it shows the structure of a suite very clearly as it evolves. It works remarkably well even for very large suites (up to several hundred tasks or more), but because the graphviz engine does a new global layout every time the graph changes, the layout is often not very stable. This may not be a solvable problem even in principle: making continual incremental changes to an existing graph without redoing the global layout would likely result in a horrible mess.

The following features of the graph view, however, help mitigate the jumping layout problem:

F Cylc Project README File

 
#C: THIS FILE IS PART OF THE CYLC SUITE ENGINE. 
#C: Copyright (C) 2008-2013 Hilary Oliver, NIWA 
#C: 
#C: This program is free software: you can redistribute it and/or modify 
#C: it under the terms of the GNU General Public License as published by 
#C: the Free Software Foundation, either version 3 of the License, or 
#C: (at your option) any later version. 
#C: 
#C: This program is distributed in the hope that it will be useful, 
#C: but WITHOUT ANY WARRANTY; without even the implied warranty of 
#C: MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the 
#C: GNU General Public License for more details. 
#C: 
#C: You should have received a copy of the GNU General Public License 
#C: along with this program.  If not, see <http://www.gnu.org/licenses/>. 
 
This is the Cylc Suite Engine, version: cylc -v 
 
Access to cylc: 
  % export PATH=/path/to/cylc/bin:$PATH 
  % cylc help 
  % gcylc & 
 
Documentation: 
   Installation: /path/to/cylc/INSTALL 
   User Guide: /path/to/cylc/doc/index.html 
   Project Home Page: http://cylc.github.com/cylc 
 
Code Contributors (git shortlog -s -n): 
   Hilary Oliver 
   Matt Shin 
   Dave Matthews 
   Ben Fitzpatrick 
   Andrew Clark 
   Luis Kornblueh 
   Scott Wales 
   Kevin Pulo 
   Annette Osprey 
   Tim Whitcomb 
   Alex Reinecke

G Cylc Project INSTALL File

 
 
Cylc can run from an unpacked release tree (at a particular version) or 
a git repository clone (which can be updated to the latest version at 
will). 
 
Consider installing into version-labelled sub-directories to enable 
parallel installation of new cylc versions as they are released, e.g.: 
 
/home/cylcadmin/cylc/ 
                 cylc-5.2.0/ 
                 cylc-5.3.0/ 
                 cylc-5.3.1/ 
                 cylc.git/              # repository 
                 latest -> cylc-5.3.1   # symlink 
 
Once installed just put the cylc bin directory in your $PATH variable: 
  % export PATH=/home/cylcadmin/latest/bin:$PATH 
 
INSTALLING FROM A SOURCE TARBALL: 
 
  % tar xzf cylc-x.y.z.tar.gz 
  % cd cylc-x.y.z 
  % export PATH=$PWD/bin:$PATH 
  % make 
 
The 'make' process does the following: 
 
  1) a VERSION file is created containing the cylc version string, e.g. 
  5.1.0. This is taken from the name of the parent directory - DO NOT 
  CHANGE THE NAME OF THE UNPACKED SOURCE TREE before running 'make'. 
 
  2) the Cylc User Guide is generated from LaTeX source files in doc/: 
    if you have pdflatex installed, a PDF version is generated 
    if you have tex4ht and ImageMagick convert installed, two HTML 
     versions (single- and multi-page) are generated 
    a doc/index.html is created with links to the generated docs. 
 
  3) The "ordereddict" Python module will be built from its C language 
  source files, in ext/ordereddict-0.4.5. This is not essential - a 
  Python implementation will be used by cylc if necessary. Currently, 
  if the build is successful you must install the module yourself into 
  your $PYTHONPATH. 
 
You may want to maintain successive versions of cylc under the same top 
level directory: 
    TOP/cylc-5.1.0/ 
    TOP/cylc-5.2.3/ 
    # etc. 
 
To allow users to run different versions of cylc, install the 
"cylc-wrapper" script (located in the admin directory) centrally, so 
that it is in the PATH of a normal user. N.B. you will need to change 
"/opt" in the script to your cylc home root, e.g. ~central_account. 
 
INSTALLING FROM A GIT REPOSITORY CLONE: 
 
  1) To get a clone that can track the official repository: 
 
     % git clone git://github.com/cylc/cylc.git 
     % cd cylc 
     % make  # build ordereddict and User Guide (as above) 
  To pull in the latest changes: 
     % git pull origin master 
     % make # remake documentation in case of changes 
 
  2) To participate in cylc development: fork cylc on github, clone your 
  own fork locally, commit changes in a feature branch and then push it 
  to your fork and issue a pull request to the cylc maintainer.

H Cylc Development History - Major Changes

I Pyro

Pyro (Python Remote Objects) is a widely used open-source, object-oriented Remote Procedure Call (RPC) technology developed by Irmen de Jong.

Earlier versions of cylc used the Pyro Nameserver to marshal communication between client programs (tasks, commands, viewers, etc.) and their target suites. This worked well, but in principle it provided a route for one suite or user on the subnet to bring down all running suites by killing the nameserver. Consequently cylc now uses Pyro simply as a lightweight object oriented wrapper for direct network socket communication between client programs and their target suites - all suites are thus entirely isolated from one another.
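The direct-socket model can be sketched minimally as follows. This is not Pyro code, just an analogy: each "suite" listens on its own port, so clients connect to it directly and no shared nameserver exists to be killed:

```python
import socket
import threading

# The "suite" side: listen on a private port (the OS picks a free one).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def suite():
    conn, _ = server.accept()
    msg = conn.recv(1024)
    conn.sendall(b"ack: " + msg)  # acknowledge the client message
    conn.close()

t = threading.Thread(target=suite)
t.start()

# The "client" side (a task, command, or viewer): connect directly
# to the target suite's own port - no intermediary involved.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"task succeeded")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # ack: task succeeded
```

Pyro wraps this kind of socket exchange in remote method calls on Python objects, which is all cylc now requires of it.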

J GNU GENERAL PUBLIC LICENSE v3.0

Copyright © 2007 Free Software Foundation, Inc. http://fsf.org/

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program–to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers’ and authors’ protection, the GPL clearly explains that there is no warranty for this free software. For both users’ and authors’ sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users’ freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.

Terms and Conditions

  1. Definitions.

    “This License” refers to version 3 of the GNU General Public License.

    “Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

    “The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.

    To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.

    A “covered work” means either the unmodified Program or a work based on the Program.

    To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

    To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

    An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

  2. Source Code.

    The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.

    A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

    The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

    The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work’s System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

    The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

    The Corresponding Source for a work in source code form is that same work.

  3. Basic Permissions.

    All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

    You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

    Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

  4. Protecting Users’ Legal Rights From Anti-Circumvention Law.

    No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

    When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work’s users, your or third parties’ legal rights to forbid circumvention of technological measures.

  5. Conveying Verbatim Copies.

    You may convey verbatim copies of the Program’s source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

    You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

  6. Conveying Modified Source Versions.

    You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

    1. The work must carry prominent notices stating that you modified it, and giving a relevant date.
    2. The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
    3. You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
    4. If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

    A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation’s users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

  7. Conveying Non-Source Forms.

    You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

    1. Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
    2. Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
    3. Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
    4. Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
    5. Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

    A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

    A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

    “Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

    If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

    The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

    Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

  8. Additional Terms.

    “Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

    When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

    Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

    1. Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
    2. Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
    3. Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
    4. Limiting the use for publicity purposes of names of licensors or authors of the material; or
    5. Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
    6. Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

    All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

    If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

    Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

  9. Termination.

    You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

    However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

    Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

    Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

  10. Acceptance Not Required for Having Copies.

    You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

  11. Automatic Licensing of Downstream Recipients.

    Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

    An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party’s predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

    You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

  12. Patents.

    A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor’s “contributor version”.

    A contributor’s “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

    Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor’s essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

    In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

    If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient’s use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

    If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

    A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

    Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

  13. No Surrender of Others’ Freedom.

    If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

  14. Use with the GNU Affero General Public License.

    Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.

  15. Revised Versions of this License.

    The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

    Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.

    If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

    Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

  16. Disclaimer of Warranty.

    THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  17. Limitation of Liability.

    IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

  18. Interpretation of Sections 15 and 16.

    If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

    End of Terms and Conditions

    How to Apply These Terms to Your New Programs

    If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

    To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>  
     
    Copyright (C) <year>  <name of author>  
     
    This program is free software: you can redistribute it and/or modify  
    it under the terms of the GNU General Public License as published by  
    the Free Software Foundation, either version 3 of the License, or  
    (at your option) any later version.  
     
    This program is distributed in the hope that it will be useful,  
    but WITHOUT ANY WARRANTY; without even the implied warranty of  
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the  
    GNU General Public License for more details.  
     
    You should have received a copy of the GNU General Public License  
    along with this program.  If not, see <http://www.gnu.org/licenses/>.

    Also add information on how to contact you by electronic and paper mail.

    If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>  
     
    This program comes with ABSOLUTELY NO WARRANTY; for details type ‘show w’.  
    This is free software, and you are welcome to redistribute it  
    under certain conditions; type ‘show c’ for details.

    The hypothetical commands show w and show c should show the appropriate parts of the General Public License. Of course, your program’s commands might be different; for a GUI interface, you would use an “about box”.

    You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see http://www.gnu.org/licenses/.

    The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read http://www.gnu.org/philosophy/why-not-lgpl.html.

1Future plans for EcoConnect include additional deterministic regional weather forecasts and a statistical ensemble.

2An OR operator on the right doesn’t make much sense: if “B or C” triggers off A, what exactly should cylc do when A finishes?

3In NWP forecast analysis suites parts of the observation processing and data assimilation subsystem will typically also depend on model background fields generated by the previous forecast.

4A warm cycling model that only writes out one set of restart files, for the very next cycle, does not need to be declared sequential because this early triggering problem cannot arise.

5Note that $CYLC_SUITE_ENVIRONMENT is a string containing embedded newline characters and it has to be handled accordingly. In the bash shell, for instance, it should be echoed in quotes to avoid concatenation to a single line.
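The quoting behaviour described in this footnote can be illustrated with a minimal bash sketch. Here the multi-line variable is simulated with printf rather than read from a real suite, so the variable contents are illustrative only:

```shell
#!/bin/bash
# Simulate a multi-line variable such as $CYLC_SUITE_ENVIRONMENT
# (the actual contents here are hypothetical, for demonstration only).
CYLC_SUITE_ENVIRONMENT="$(printf 'FOO=1\nBAR=2\nBAZ=3')"

# Quoted: embedded newlines are preserved, so three lines are printed.
echo "$CYLC_SUITE_ENVIRONMENT"

# Unquoted: word splitting collapses the value to a single line.
echo $CYLC_SUITE_ENVIRONMENT
```

Without the quotes, the shell splits the value on whitespace (including newlines) into separate words, which echo then joins with single spaces.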

6If you accidentally delete a port file while a suite is running, use cylc scan to determine the port number then use it on the command line (--port) or rewrite the port file manually.

7The cylc submit command runs a single task exactly as its suite would, in terms of both job submission method and execution environment.

8If you copy a suite using cylc commands or GUI the entire suite definition directory will be copied.

9Spawning any earlier than this brings no advantage in terms of functional parallelism and would cause uncontrolled proliferation of waiting tasks.

10This is because you don’t want Model[T] waiting around to trigger off Model[T-12] if Model[T-6] has not finished yet. If Model is forced to be sequential this can’t happen because Model[T] won’t exist in the suite until Model[T-6] has finished. But if Model[T-6] fails, it can be spawned-and-removed from the suite so that Model[T] can then trigger off Model[T-12], which is the correct behaviour.